
In a certain sense complexity CAN be destroyed. Yes, there is a minimal level of complexity that must exist, but given an entity with a purpose, in many cases it is possible to reduce the complexity of that entity without changing its purpose.

Take for example this function which adds one to an integer:

   function addOne(x: int) -> int {
       return (x * 1) + 0 - x + x + 1;
   }
can be reduced to:

   function addOne(x: int) -> int {
       return x + 1;
   }

The examples are obvious, but from them you can deduce that there are many instances in software engineering where such reductions can exist but are not obvious at all.


I phrase it as "The solution can be as simple as the problem but no simpler", which means you can "create" complexity. We see this all the time in code; after all, we must account for it.

But you can't make the solution simpler than the problem. There's kind of a "by definition" aspect to this, so it's kinda circular, but it's still a valuable observation, because we also see this in code all the time. For instance, it plays a significant role in the leakiness of abstractions; that leakiness is typically the attempt to make a solution that is simpler than the entire problem space. You can be fine as long as your particular problem fits in the space of the leaky solutions, but once your problem pokes out of that you can be in big trouble.


Yes!

> you can "create" complexity

AKA "the complicator's gloves" https://thedailywtf.com/articles/The_Complicator_0x27_s_Glov...


I would rephrase this as “the solution can be as simple as your model of the problem but no simpler.” A better model, structure, implementation, or restatement of the problem could transform the dimensions of the problem space radically.

But I do agree with you in spirit of what you said, that there is some essential “hardness” of the problem that is part-and-parcel with the difficulty in stating or explaining the problem. I see a similar complexity in representing the problem in code or functions or algorithms etc. Then comes a related issue with reducibility; some concepts or problems cannot be broken down into smaller discrete parts. They are already as simple as they can be. This is ideal in a certain sense, in that the concepts can be explored with fewer assumptions. However when someone asks you to explain like they’re 5 years old, interpretation and perception starts to creep in through intuitions and abstractions.


> I phrase it as "The solution can be as simple as the problem but no simpler"

Even that can be deceptive. For example, I can state very simply:

    I want to move my house 3 inches to the left
It's a very simple problem, right? How complex do you think the minimally simple viable solution is?

It's about how the problem maps onto the constraints of implementation.


I am certain this is some kind of fallacy, since "house" is an abstraction that would need to be broken down somehow. The result would not be a simple problem anymore. Sorry, no fancy words, just common sense.


It's not that simple. For one, we don't have a formal definition of "simple", and secondly, even when we do have a definition of "simple" for certain contexts, we can never prove whether something is at its "simplest" state.

For example, Newton's laws of motion seem to be a simple solution to the problem of formally describing motion. They seem to fit the problem, but there is no way to verify that they are the simplest solution, and we don't even really know what "simple" is.

We know through experimentation that relativity fits observations more accurately, but even with relativity we don't know if it's the most "simple" solution or even if it's fully correct.

Because you cannot know, or ever know, whether a solution is in its simplest form, there is ALWAYS an opportunity for a reduction to be discovered.

You absolutely cannot say that "The solution can be as simple as the problem but no simpler" because we don't know what "simple" is and we don't know whether or not a "simpler" solution may be discovered in the future.


Well, to use your x + 1 example, we can say that one is simpler than the other, even if we can't define "simple" in exact terms, or even measure it precisely.

(By the way, I just noticed that your first sentence is perfect for this topic. I suspect that it was accidental, but it's beautiful.)

So I think that simplicity is perhaps somewhat like encoding in Shannon's information theory. Even if you can calculate the minimum number of bits required to represent something, you can't necessarily devise an encoding that will represent it in that number of bits. The analogy isn't perfect, but it has the same feel of "pushing toward something that you can't actually reach". (I guess you said the same thing with your Newton's laws analogy, so I'm kind of repeating your point here.)
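To make the analogy concrete, here's a toy sketch (plain Python; the distribution is made up): Shannon entropy lower-bounds the average length of any prefix-free code, yet a concrete code such as Huffman's usually can't hit that bound exactly.

```python
import heapq
import math

# A made-up symbol distribution for illustration.
probs = {'a': 0.5, 'b': 0.25, 'c': 0.15, 'd': 0.10}

# Shannon entropy: the theoretical lower bound on average bits per symbol.
entropy = -sum(p * math.log2(p) for p in probs.values())

# Build Huffman code lengths: repeatedly merge the two least likely nodes,
# adding one bit to every symbol inside each merged node.
heap = [(p, i, {s: 0}) for i, (s, p) in enumerate(probs.items())]
heapq.heapify(heap)
tiebreak = len(heap)
while len(heap) > 1:
    p1, _, c1 = heapq.heappop(heap)
    p2, _, c2 = heapq.heappop(heap)
    merged = {s: depth + 1 for s, depth in {**c1, **c2}.items()}
    heapq.heappush(heap, (p1 + p2, tiebreak, merged))
    tiebreak += 1
lengths = heap[0][2]

avg_len = sum(probs[s] * depth for s, depth in lengths.items())
print(f"entropy = {entropy:.3f} bits, Huffman average = {avg_len:.3f} bits")

# The achievable code sits above the bound but cannot beat it.
assert avg_len >= entropy
```

Here the entropy is about 1.743 bits/symbol while the Huffman code averages 1.75: close, but the bound itself is out of reach.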

But I disagree with your last paragraph. Even if we can't measure "simple", we can still kind of feel when complexity is there. If the user needs to be able to do 50 things, then you either need a program that lets the user do 50 things, or you need 50 programs (or something in between). If your program is going to let the user do 50 things, then there has to be a way for the user to specify which of those 50 things they want to do, and the code has to be able to do all 50 things. You might be able to come up with some commonalities, a unified framework to simplify things somewhat, but that can only take you so far. You're still going to have to have 50 buttons on the GUI, or 50 command-line parameters that you accept, or 50 different URLs that you interpret, or whatever.

Or, you can have 50 separate programs, and each of them can be much simpler. But then you have 50 programs, and the user has to figure out which one to use for each thing, and remember them all.

So I claim that you can kind of tell that it's there, even if you can't measure it, and that's good enough for the statement to be a useful thing to keep in mind, even if it's not something we can measure or calculate.


Just because it lacks a formal definition does not mean that somebody cannot say it. It's a human notion, not a formal one.

(Also: I'd suggest that it's pretty rude to tell somebody they "absolutely cannot say" something, and try to avoid telling you that you absolutely cannot say that somebody absolutely cannot say something)


Human notions and formal notions overlap. All formal notions are human because they are developed by humans but not all human notions are formal because formal implies internal consistency and not all humans are rational.

The difference between the two sets: Human intuition and Formal notions is the set of all notions that are contradictory or irrational. Humans can be contradictory or irrational but we should try to avoid it.

>(Also: I'd suggest that it's pretty rude to tell somebody they "absolutely cannot say" something, and try to avoid telling you that you absolutely cannot say that somebody absolutely cannot say something)

Maybe a better way to put it is "absolutely logically impossible" rather than "you absolutely cannot say." No offense intended.


There are formal definitions of complexity, of which Kolmogorov complexity is perhaps the best known.

This may not be the sort of complexity you are thinking of (perhaps you have in mind 'hard to understand', which is an important issue that is not necessarily captured by formal measures), and formal metrics measure solutions, not problems (or, at best, a specific expression of a problem.) We cannot often decide whether a given solution is the simplest possible, but there are several fallacies that do not follow from this fact, such as:

1) Therefore, there cannot be a simplest solution.

2) Therefore, there is ALWAYS an opportunity for a reduction to be discovered.

[1] https://en.wikipedia.org/wiki/Kolmogorov_complexity


>1) Therefore, there cannot be a simplest solution.

I never said this; I said that you cannot verify that the current solution you have is the simplest one. This is based on science: nothing can be positively verified by science. Because we treat software more as black-box experiments than as formal systems, verifying that your solution is the simplest is impossible until you define formal axiomatic rules. But this is basically never done, and even then you'd have to deal with Gödel's incompleteness.

>2) Therefore, there is ALWAYS an opportunity for a reduction to be discovered.

There is always opportunity in the sense of "possibility" if a solution hasn't been formally verified to be at its simplest. Statement 2 follows from the notion that complexity cannot be verified.

> [1] https://en.wikipedia.org/wiki/Kolmogorov_complexity

From your wikipedia source:

"The notion of Kolmogorov complexity can be used to state and prove impossibility results akin to Cantor's diagonal argument, Gödel's incompleteness theorem, and Turing's halting problem. In particular, no program P computing a lower bound for each text's Kolmogorov complexity can return a value essentially larger than P's own length (see section § Chaitin's incompleteness theorem); hence no single program can compute the exact Kolmogorov complexity for infinitely many texts."

The above states that the exact Kolmogorov complexity can never be computed for all texts, and that this fact has been formally proven.

Doesn't this support my overall point? If you can't verify the complexity of a piece of text, then you can't verify whether it's the least complex, because you don't even know its complexity, let alone whether it's minimal.


It is not entirely clear what your original point was, as it apparently involves being certain that something is possible. Are you sure?

The possibility of there being a simpler solution does not justify your claim "there is ALWAYS an opportunity for a reduction to be discovered" (and you specifically capitalized ALWAYS so we wouldn't misunderstand you!)

It might not be what you meant, but it is what you wrote, and apparently what you are now trying to justify.

Furthermore, you were originally objecting to the statement that "the solution can be as simple as the problem but no simpler", but this is a reasonable statement given that, as we both seem to agree, there is no formal metric that captures the sort of complexity we are talking about here. Not being able to settle the issue formally would not mean that we cannot discuss it usefully, and a concept of complexity in which solutions can be simpler than problems would be a strange, and probably not useful, definition.

As others have pointed out, Fred Brooks wrote about the very useful, but informal, concepts of essential and accidental complexity, to make an important point. In being pedantic about the lack of a formal definition of such terms, you are avoiding (or, at least, unnecessarily complicating) the sort of issues he was discussing (and we are here).


>It is not entirely clear what your original point was. The possibility of there being a simpler solution does not justify your claim "there is ALWAYS an opportunity for a reduction to be discovered" (and you specifically capitalized ALWAYS so we wouldn't misunderstand you!)

Let me clarify. If the simplicity of a solution cannot be verified, then the probability of the existence of a simpler solution will ALWAYS be greater than zero. This is in line with what I wrote, but hopefully more clear.

>Furthermore, you were originally objecting to the statement that "the solution can be as simple as the problem but no simpler", but this is a reasonable statement given that, as we both seem to agree,

No I don't agree. The statement is easily disproven with an example of a problem that is more complex than the solution:

  Problem: reduce -> 1*2*3*4*5*6*7*2*38*3432*0*232*22*33453*221*334/2*3929330

  Solution: 0. 
>there is no formal metric that captures the sort of complexity we are talking about here

I disagree, fuzzy human intuition can always be formalized given enough thought. If the intuition is flawed and irrational, formalizations of such intuitions will make this evident and further our overall understanding.

But either way, even with this fuzzy definition of complexity, my example stands: Newton's laws of motion, seemingly the simplest formal description of motion, were not only not the most general or simplest solution but ultimately "less correct" than relativity. Something always seems like the simplest solution, but you never actually know. This is in line with my experience.

>In being pedantic about the lack of a formal definition of such terms, you are avoiding (or, at least, unnecessarily complicating) the sort of issues he was discussing (and we are here).

I'm not trying to be pedantic. I am just trying to say that there are tons of instances where something looks like it's the simplest solution but it is actually not and you can never really know either.

Formal proof that something can never be verified to be in its simplest form is just the ultimate supporting pillar. We can talk in terms of opinions and vague human definitions, but that kind of argument leads to never-ending conversations. A formal proof moves the argument toward a final resolution. I actually wasn't expecting a formal proof to be introduced into this argument, but it was, via your Wikipedia source.

Before you introduced that source, I always knew that verifying that a solution is the simplest possible solution is an impossible endeavor in terms of science. Your source introduced to me that it is also an impossible endeavor in terms of logic.

Of course the strategy to win an argument that has already been verified by formal proof is to move the argument out of the realm of formal proof which you are doing now. It's a valid strategy but I still disagree, even on those fuzzy informal grounds.

So, to keep in line with the overall topic: can more complexity always be destroyed for a given solution? Can complexity always be reduced for a given statement?

The formal answer is: We can absolutely never know.


The thing is, prior to Gödel, one might have said "a probability of the existence of a proof of the decidability of Peano arithmetic will ALWAYS be greater than zero", and, by your argument here, that would have been correct - but, of course, one would have been mistaken.

It is your persistent and insistent use of the word 'ALWAYS' that is complicating things, and things would have been simpler if you had written of possibilities rather than certainties in the first place. You have written a great deal arguing for something that is not in doubt.

>>Furthermore, you were originally objecting to the statement that "the solution can be as simple as the problem but no simpler", but this is a reasonable statement given that, as we both seem to agree,

>No I don't agree. The statement is easily disproven with an example of a problem that is more complex than the solution:

  >Problem: reduce -> 1*2*3*4*5*6*7*2*38*3432*0*232*22*33453*221*334/2*3929330

  >Solution: 0.
(By the way, it is rather amusing how you cut off my sentence in the middle in order to give it a meaning that the original does not have!)

Here we have an additional confusion. The solution to a travelling salesman problem is just an ordered list of vertices, but doing what you are doing here - just writing down the solution - completely misses the point.

> So to keep in line with the overall topic. Can more complexity always be destroyed for a given solution? Can complexity always be reduced for a given statement?

> The formal answer is: We can absolutely never know.

It is probably just as well that your "Solution: 0" example is beside the point, for if it were not, I would doubt your assertion that we can absolutely never know whether more complexity could always be destroyed for this given solution.


>The thing is, prior to Gödel, one might have said "a probability of the existence of a proof of the decidability of Peano arithmetic will ALWAYS be greater than zero", and, by your argument here, that would be correct - but, of course, one would have been mistaken.

There is a difference you're not seeing. The Wikipedia article introduces the fact that it has been Proven that the complexity of a solution can Never be verified. This is equivalent to saying: "It has absolutely been proven that the probability of the existence of a simpler solution is greater than 0."

This is entirely different from what you're saying. It has never been proven that the probability of deciding Peano arithmetic will ALWAYS be greater than zero. We can only make your statement given a lack of knowledge; given more knowledge, the statement may become untrue. My statement is saying that no amount of knowledge in the universe will ever allow us to know whether a solution is in its simplest state.

>It is your persistent and insistent use of the word 'ALAWYS' that is complicating things, and things would have been simpler if you had written of possibilities rather than certainties in the first place. You have written a great deal arguing for something that is not in doubt.

Then I clarified it when you said it wasn't clear. The reason I used the term ALWAYS was to emphasize that the probability is PROVEN to be greater than zero. You didn't like it and said so, and then I clarified what I meant.

>(By the way, it is rather amusing how you cut off my sentence in the middle in order to give it a meaning that the original does not have!)

It's not my intention to do this. If you feel it was done, just point it out and I will address it accordingly. If I detect any further malice in your replies I will end this conversation.

>Here we have an additional confusion. The solution to a travelling salesman problem is just an ordered list of vertices, but doing what you are doing here - just writing down the solution - completely misses the point.

That's a valid point. If we called "0" the "conclusion," then shouldn't the "solution" include the computations needed to arrive at the "conclusion"?

If so, I would say that the computations in the "solution" are still simpler than the problem overall:

   problem = "1*2*3*4*5*6*7*2*38*3432*0*232*22*33453*221*334/2*3929330"

   def solve(problem):
      # test for a literal zero factor, not merely the digit '0' inside a
      # larger number such as "3929330"
      return 0 if '0' in problem.replace('/', '*').split('*') else eval(problem)
If you look carefully at the computational steps of "solve", one branch does fewer calculations than the total count of numbers in the problem. Therefore even the computational steps are smaller/simpler than the total size of the problem.

This is equivalent to saying that there exist problems of size N where the big-O complexity of the solution is O(M) with M < N, which is certainly true. I know I sort of slaughtered the meaning of big-O here, but you get it.
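To back up the "fewer steps" claim, here's a hypothetical instrumented version of that solver (the function name and token handling are mine) that counts how many factors it examines before short-circuiting:

```python
import re

def solve_counting(problem: str):
    """Return (value, factors_examined); stop at the first literal zero factor."""
    steps = 0
    for token in re.split(r'[*/]', problem):
        steps += 1
        if token == '0':
            return 0, steps
    return eval(problem), steps

problem = "1*2*3*4*5*6*7*2*38*3432*0*232*22*33453*221*334/2*3929330"
value, steps = solve_counting(problem)
total = len(re.split(r'[*/]', problem))
# Short-circuits at the 11th of 18 factors, so the work done is strictly
# smaller than the size of the problem statement.
assert value == 0 and steps == 11 and total == 18
```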

>It is probably just as well that your "Solution: 0" example is beside the point, for if it were not, I would doubt your assertion that we can absolutely never know whether more complexity could always be destroyed for this given solution.

I get your point but can you prove it?


The whole point of Kolmogorov complexity is precisely the concept of a minimal program for solving problems of a certain type. The uncomputability of Kolmogorov complexity is beside the point here, just as Turing's halting theorem does not prove that it is always possible that any given program will fail to halt.

> One branch should do less calculations then the total amount of numbers in the problem.

Judging the complexity of a solution by how it responds to a subset of problems is the same sort of thing as just stating the solution, as that subset is the only set of problems that we are considering. In this case, that set is "products of a set of numbers including at least one zero."


>The uncomputability of Kolmogorov complexity is beside the point here, just as Turing's halting theorem does not prove that it is always possible that any given program will fail to halt.

I never implied this. First off, the halting problem only proves that you can never CALCULATE or PROVE whether a given program will fail to halt. It does not prove that it will always fail to halt or never fail to halt. I never said or implied otherwise.

The uncomputability of Kolmogorov complexity means you can never CALCULATE or PROVE the simplest form of a program. It does not mean that such a form doesn't exist, nor does it mean that we haven't randomly found such a form for a specific program. It just means that even if we have found something that we think is the Kolmogorov complexity of a given program, we can never verify it to be true.

This fact is sufficient to show the following can never be proven true:

"There exists a program where the Complexity can never be reduced or destroyed"

>Judging the complexity of a solution by how it responds to a subset of problems is the same sort of thing as just stating the solution, as that subset is the only set of problems that we are considering. In this case, that set is "products of a set of numbers including at least one zero."

But Kolmogorov complexity calculates exactly what you don't want to judge. See the definition:

>In algorithmic information theory (a subfield of computer science and mathematics), the Kolmogorov complexity of an object, such as a piece of text, is the length of a shortest computer program (in a predetermined programming language) that produces the object as output.

From the definition you need a predefined input from the set of all inputs and a predefined language from the set of all languages.

In terms of algorithmic complexity theory, I can define a problem to be as narrow or as large as I want. The domain for the previous problem can be the set of sets of numbers that contain one or more zeros.

Subsets from larger problems can be used to create other problems and those problems and all problems in general can be used to make statements about complexity all together. If a subset shows something is not True then you have shown that you cannot make that blanket statement about the subset or the parent set.
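As a side note on how this plays out in practice: K(s) is uncomputable, but any real compressor gives a computable *upper bound* on it (compressed length plus a constant for the decompressor). A quick sketch with Python's zlib, using made-up inputs:

```python
import zlib

def k_upper_bound(s: bytes) -> int:
    # The compressed length upper-bounds K(s), up to the fixed size of the
    # decompressor; it can never certify that no shorter description exists.
    return len(zlib.compress(s, 9))

repetitive = b"ab" * 500              # 1000 bytes, highly regular
less_regular = bytes(range(256)) * 4  # 1024 bytes with less structure

# The bound shrinks with regularity, but we can never know how far above
# the true Kolmogorov complexity it still sits.
assert k_upper_bound(repetitive) < k_upper_bound(less_regular) < len(less_regular)
```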


>I never implied this... I never said or implied otherwise.

This is a very odd way of responding. When someone makes a point, it does not necessarily carry the implication that you said otherwise.

> First off the halting problem only proves that you can never CALCULATE or PROVE whether a given program will fail to halt.

It does not prove that. There are programs that one can prove will halt, and also others that you can prove will not halt.

> This fact is sufficient to show the following can never be proven true: "There exists a program where the Complexity can never be reduced or destroyed"

This seems to be demonstrating the same sort of misunderstanding as over what the halting theorem proves. Even where we cannot compute the Kolmogorov complexity, it does not follow that there is not a minimal one. As I mentioned before, the whole idea of Kolmogorov complexity is predicated on there being programs of minimal complexity.

Going back to that Wikipedia article, we see that "The theorem says that, among algorithms that decode strings from their descriptions (codes), there exists an optimal one."


>This is a very odd way of responding. When someone makes a point, it does not necessarily carry the implication that you said otherwise.

The first phrase was in response to your statement; the second was a follow-up to my sentence that you elided (...). And you accused me of modifying your wording with bad intentions.

>It does not prove that. There are programs that one can prove will halt, and also others that you can prove will not halt.

Semantic error: the halting problem only proves that a program can never CALCULATE or PROVE that a given program will fail to halt. That was what I meant.

>This seems to be demonstrating the same sort of misunderstanding as over what the halting theorem proves. Even where we cannot compute the Kolmogorov complexity, it does not follow that there is not a minimal one. As I mentioned before, the whole idea of Kolmogorov complexity is predicated on there being programs of minimal complexity.

>Going back to that Wikipedia article, we see that "The theorem says that, among algorithms that decode strings from their descriptions (codes), there exists an optimal one."

No, not a demonstration of misunderstanding the halting problem. It is a demonstration of misunderstanding Kolmogorov complexity, as you only just introduced me to it today.

Either way, given any code, you cannot determine whether it is the optimal one, even if an optimal one Does exist, as the theorem states. So the intent still stands; let me modify the statement to get the semantics correct while preserving the intent:

"For any program you can determine whether Complexity for that program can be reduced or destroyed"

The above statement is False.

So in all practicality, even though a simplest form always exists, we can never know if we have found it. The possibility is always there, and the intent of what I said is preserved, if not strengthened; see my initial reply in this thread:

"The examples are obvious, but from them you can deduce that there are many instances in software engineering where such reductions can exist but are not obvious at all."

Kolmogorov complexity still offers axiomatic support for my statement. In fact, it makes the statement stronger and changes it to:

"In all instances of software engineering reductions can exist, you can never determine otherwise."

I can't say for sure but I believe that the statement above applies not only to Kolmogorov complexity, but algorithmic complexity as well.


Excellent! You do indeed have a stronger statement, now that various misunderstandings have been fixed.


I take your point, but the complexity I’m talking about is what remains after you’ve done any readily applicable simplifications.

Any normal task has an inherent level of complexity (“add one” is a very simple task, so can be reduced down as you point out). “Invert a matrix” is somewhat less reducible. “Calculate a quaternion” even less so, etc.

We generally string dozens or hundreds of these simpler tasks into sequences to get what we want, and human understanding being what it is, there’s quite a difference in conceptual complexity between “fly the plane to [here]” and doing all that maths.

There’s probably a relationship to subitising here as well, the brain does seem to be able to process small groups of things, even if it knows those things represent larger concepts, better than larger groups of things. This is just speculation on my part though.


I got into a really deep conversation about this here:

https://news.ycombinator.com/item?id=23042796

You may have seen it, it's in the same thread of conversation we're having.

Basically, there's an axiomatic proof that you can never know whether you've applied all possible simplifications. Therefore you can never know whether more complexity can be destroyed or reduced.


With respect, I’m not seeing how your first addOne example returns an integer one greater than its integer argument.

Am I missing something?


  (x * 1) + 0 - x + x + 1
  x + 0 - x + x + 1
  x - x + x + 1
  0 + x + 1
  x + 1



