
My company recently decided to switch over to microservices. It was a long and arduous process, but we chose not to compromise.

Basically, for the greatest amount of modularity, we divided all 400 functions in our monolithic application into 400 individual servers, because obviously functions aren't modularizing everything enough. You really need to put more and more wrappers around all of your functions. First put a framework around your function, then put an HTTP API layer around it, then wrap a server app around it, then put an entire container around it and boom! More wrappers == Less technical debt. To illustrate how this works, see the example below:

   wrapper(
     wrapper(
       wrapper(
         wrapper(
           f(x)
         )
       )
     )
   )
See that? Obviously, for every additional wrapper you add around your original function, your technical debt becomes less. This is why it makes sense not just to use functions to modularize your code, but to wrap all your functions in containers and then put those containers in containers.

Now each of our 400 engineers individually manages one entire function within one entire container. It's amazing: they don't have to think about two things anymore; they can just concentrate on one thing.

While I'm not sure what has improved yet, everything feels better. Our company is following industry trends and buzzwords. Technical debt actually went up, but that's just our fault for building it wrong.

Some engineer asked me why we couldn't just load balance our original monolith and scale it horizontally. I fired that engineer.
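
For the record, here's roughly what that engineer had in mind (a minimal sketch with hypothetical instance addresses): run N identical copies of the monolith and round-robin requests between them.

    import itertools
    import requests

    # Hypothetical addresses; every box runs the exact same monolith.
    monolith_instances = itertools.cycle([
        "http://10.0.0.1:8000",
        "http://10.0.0.2:8000",
        "http://10.0.0.3:8000",
    ])

    def handle(path: str) -> bytes:
        # Forward each request to the next copy in the rotation.
        backend = next(monolith_instances)
        return requests.get(backend + path).content

One deployable unit, horizontal scale, zero new wrappers. Disgusting.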

Another engineer came to me and told me that for some function:

   func some_func(a: int, b: int) -> int {
      return a + b
   }
It's probably better to do 1. instead of 2.

   1. Call the function: some_func(2,3)

   2. Make a request: requests.get("http://www.pointlessapi.com/api/morecomplexity/someextratechnicaldebt/randomletters/asdfskeidk/some_func?a=2&b=3")
   and parse the JSON:

   {
     "metadata": {
         "date": "1/1/2020",
         "request_id": "123323432",
         "other_pointless_crap": ...,
         "more_usless_info": ...,
         "function_name (why?)": "some_func"
     },
     "actual_data": 5
   }
I fired that engineer too. Obviously 2. is better than 1. Also, why am I using JSON? Not enough wrappers! You have to wrap that HTTP call in additional wrappers like GraphQL or gRPC!!! (See wrapper logic above.)
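
To spell out option 2 in code (a sketch, assuming the glorious endpoint above is actually deployed):

    import requests

    def some_func_over_http(a: int, b: int) -> int:
        # One network hop, one JSON parse, and a pile of metadata later...
        resp = requests.get(
            "http://www.pointlessapi.com/api/morecomplexity"
            "/someextratechnicaldebt/randomletters/asdfskeidk/some_func",
            params={"a": a, "b": b},
        )
        # ...we extract the 5 that a plain some_func(2, 3) would have returned.
        return resp.json()["actual_data"]

Compare with option 1, which is a single function call with no network, no DNS, and no JSON in sight. Clearly inferior.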

Have you guys heard of a new trend called Sololithic architecture? Basically, the new philosophy states that all 400 of our microservices should be placed in 400 containers and run on a cluster of 399 computers under Kubernetes!

I may not understand where technical debt comes from, and I also may not understand how all these architectures will fix the problem of technical debt forever... but I know that industry trends and buzzwords are more intelligent than me, and monoliths are bad bad bad and obviously the source of all technical debt! Just cut everything into little pieces and technical debt becomes ZERO.

Right? The technical definition of technical debt is not enough cutting of your logic into tiny pieces and not enough wrappers around your modules, so all you need to do is cut everything up, put wrappers around it, and problem solved! Makes sense!


You used words like "buzzwords", which makes you sound facetious. I know you aren't, but others could get the wrong idea. This is a very useful piece of advice that should be taught in CS 101.


The sad thing is, I was 1/3 into your post before it was clear that you were being cheeky. I've read several comments/articles on HN lately that were like this but serious.

Note: you forgot your container orchestration orchestrator. Also you should make clear that your CI/CD pipelines will also use the orchestration orchestrator pattern.


Yes, a good storyteller weaves the details and themes into the plotline so the pacing never gets interrupted and the delivery feels natural.

Kojima interrupts the entire game with a character monologue about some tragic past or some modern geopolitical issue. Makes sense for every single boss to launch into 30 minutes talking about all this stuff right before a gun fight.


> Kojima interrupts the entire game with a character monologue about some tragic past or some modern geopolitical issue. Makes sense for every single boss to launch into 30 minutes talking about all this stuff right before a gun fight.

and then you had the extremely awkward scene in MGS5 in the back of a truck with Skullface where nobody talks for 20 minutes and you can't skip it, because they "forgot" to record a long monologue. So bad.


Kojima's games are not known for good storytelling. All of them are downright bad. People who like them tend not to mind cringey and over-the-top melodramatic dialogue...

If you want to play a deep game with an immersive and engrossing story, play Planescape: Torment.


I mind the cringey, boring, and melodramatic stuff, but I find the stuff that does work in his games incredible enough to slog through it.

Planescape: Torment is on a very short list of games where some of the story or setting just randomly pops in my head again. Really one of the best.

The gameplay was pretty awful though, and I never really got into the incredibly cluttered Infinity Engine graphics. Would love a proper 3D remake that either improves the gameplay or makes it even less relevant.


The GameCube is well emulated, so you can play Twin Snakes on PC.

Honestly though, it doesn't hold up well, and even as revolutionary as it was for its time, the dialogue was still often cringey.


I don't have a problem with the dialogue, but Twin Snakes had all of the cutscenes totally re-directed from the original PlayStation version. The cutscenes in Twin Snakes feature ridiculous stunts and out-of-character overreactions.

I consider the gameplay to be superior, but it might not be worth it when it ruins the game's storytelling.


No way, man. He makes great games, but he adds a lot of cinematics with cringey dialogue and an over-the-top story to all of his games.

The man isn't perfect. I feel a game like Red Dead Redemption 2 is closer to that perfect game we're looking for. It hits all the marks in terms of story and systems-based gameplay.


You're not wrong - Kojima games are just as much film as they are video game.

But as someone who is inherently drawn to over the top narrative-based games (see Persona 5, FF7, Detroit: Become Human etc.) I can't help but feel like his style of gaming is the future of narrative single-player experiences.


RDR2 blew me away. It’s a masterpiece.


Even at the time it came out, MGS1 was cringeworthy.

Definitely revolutionary for its time, but the amount of dialogue and cinematic flair didn't make up for how over the top that dialogue was.

It's just kind of not immersive to have a 30-minute conversation and an in-depth look into every boss's tragic past before getting into a boss fight.

Good writers/game designers introduce all of the above in a more natural way.

I also feel a lot of it is just culture. Japanese cinematics tend to be like that, maybe? I feel subtitles without dubbing tend to hide a lot of the cringiness of the whole situation. For example, the original FF7 actually works well because it's all text.


That's because you haven't stepped outside of the box. Most of the programming languages you learn are derived from Algol, so Java, C, C++, Python, and JavaScript are all essentially permutations of the same thing.

There are other families of languages, like Prolog, ML, and Lisp, that are drastically different, and I have found that many people find these languages hard to learn.


While you might be right, I've seen the same comments about people who did a thorough functional programming class and then had to learn another functional programming language later on.

Moreover, you don't know if the parent also did functional programming.

But I think your remark is a fair one; it's about wrapping your head around programming paradigms that are quite far from each other.


Here's Peter Norvig's advice about different paradigms:

Learn at least a half dozen programming languages. Include one language that emphasizes class abstractions (like Java or C++), one that emphasizes functional abstraction (like Lisp or ML or Haskell), one that supports syntactic abstraction (like Lisp), one that supports declarative specifications (like Prolog or C++ templates), and one that emphasizes parallelism (like Clojure or Go).

(https://norvig.com/21-days.html)


I wasn't just talking about functional. Prolog is a different paradigm altogether.


This could probably be extrapolated further to say all programming on von Neumann architecture is fundamentally similar.


Actually, no it can't. That would be like extrapolating that all expressions of language are fundamentally similar.


How does an incompetent engineer tell the difference between incompetence and imposter syndrome?

I don't think it's realistic to say that all incompetent engineers think they're competent.


It's always easy to refine an abstract idea; such refinement is trivial.

For implementing and maintaining designs, especially in software, the squiggle goes in the opposite direction.


The squiggle is meant to explore a particular design paradigm, where the mess is the research, and things start calming down as research is synthesized, constraints are applied, a design is created and tested, and finally an iterated version is delivered. It assumes surprises but suppresses the height of ups and downs in the middle 50%.


Yes, I know. I am saying refining an idea is done all the time. But you have to realize that designers are only dealing with an abstract idea. It is easy to cut a thing in half and reconfigure components of that thing into abstract perfection when that thing (aka Design) lives only in your imagination.

Such is not the case for an idea that has already been materialized. There are high costs to modifying an idea that is already implemented.

The cost is so high that often developers don't reconfigure the product from scratch; what they do is build patches, grafts, and superficial additions on top of the core product so that the implementation of the idea looks like the actual idea.

Like patching up an old car, eventually the accumulated ugliness reaches an apex and the core product must be rebuilt from scratch. I would say for the vast majority of projects implemented in the real world, the squiggle is in the opposite direction.

Most designers tend not to understand the realities of what happens to their designs in production. The design of a system should not end at implementation. Implementation is just the beginning of the journey.

The best designers are the ones that can create designs that account for the inevitable degradations that happen to an idea that has already been crystallized.

The best designers are ones that can modify a design to take into account current limitations of technology and flaws while the idea is being executed in production.


Indeed, sometimes it begins with a clear statement of the goal... then gets messy.


If you zoom in enough at the start, it looks like a straight line.


In a certain sense complexity CAN be destroyed. Yes, there is a minimal level of complexity that must exist, but given an entity with a purpose, in many cases it is possible to reduce the complexity of that entity without changing its purpose.

Take for example this function which adds one to an integer:

   function addOne(x: int) -> int {
       return (x * 1) + 0 - x + x + 1;
   }
can be reduced to:

   function addOne(x: int) -> int {
       return x + 1;
   }

The examples are obvious, but from them you can deduce that there are many instances in software engineering where such reductions exist but are not obvious at all.
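
To make that concrete, here's a sketch of a reduction that is invisible until you know the underlying identity (hypothetical function names, valid for non-negative n):

    # Obvious version: n additions.
    def sum_to_n(n: int) -> int:
        total = 0
        for i in range(1, n + 1):
            total += i
        return total

    # Same purpose, reduced: one multiplication and one division.
    # The reduction only becomes obvious once you know that
    # 1 + 2 + ... + n == n * (n + 1) / 2.
    def sum_to_n_reduced(n: int) -> int:
        return n * (n + 1) // 2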


I phrase it as "The solution can be as simple as the problem but no simpler", which means you can "create" complexity. We see this all the time in code; after all, we must account for this.

But you can't make the solution simpler than the problem. There's kind of a "by definition" aspect to this, so it's kinda circular, but it's still a valuable observation, because we also see this in code all the time. For instance, it plays a significant role in the leakiness of abstractions; that leakiness is typically the attempt to make a solution that is simpler than the entire problem space. You can be fine as long as your particular problem fits in the space of the leaky solutions, but once your problem pokes out of that you can be in big trouble.
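
As a concrete sketch of that leakiness (a made-up helper, not any particular library):

    # An abstraction that is "simpler than the problem space": it hides
    # where the lines come from and how big the file is.
    def load_all_lines(path: str) -> list:
        with open(path) as f:
            return f.readlines()  # pulls the entire file into memory

    # Fine while every caller's file fits in RAM. The first time one
    # doesn't, the abstraction leaks: the caller suddenly has to care
    # about streaming and buffering, everything the wrapper was hiding.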


Yes!

> you can "create" complexity

AKA "the complicator's gloves" https://thedailywtf.com/articles/The_Complicator_0x27_s_Glov...


I would rephrase this as “the solution can be as simple as your model of the problem but no simpler.” A better model, structure, implementation, or restatement of the problem could transform the dimensions of the problem space radically.

But I do agree with you in spirit of what you said, that there is some essential “hardness” of the problem that is part-and-parcel with the difficulty in stating or explaining the problem. I see a similar complexity in representing the problem in code or functions or algorithms etc. Then comes a related issue with reducibility; some concepts or problems cannot be broken down into smaller discrete parts. They are already as simple as they can be. This is ideal in a certain sense, in that the concepts can be explored with fewer assumptions. However when someone asks you to explain like they’re 5 years old, interpretation and perception starts to creep in through intuitions and abstractions.


> I phrase it as "The solution can be as simple as the problem but no simpler"

Even that can be deceptive. For example, I can state very simply:

    I want to move my house 3 inches to the left
It's a very simple problem, right? How complex do you think the simplest viable solution is?

It's about how the problem maps onto the constraints of implementation.


I am certain this is some kind of fallacy, since "house" is an abstraction that would need to be broken down somehow. The result would not be a simple problem anymore. Sorry, no fancy words, just common sense.


It's not that simple. For one, we don't have a formal definition of "simple", and secondly, even when we do have a definition of "simple" for certain contexts, we can never prove that something is or is not at its "simplest" state.

For example, Newton's laws of motion seemingly are a simple solution to the problem of finding a formal method of describing motion. They seem to fit the problem, but there is no way to verify that they are the simplest solution, and we don't even really know what "simple" is.

We know through experimentation that relativity fits observations more accurately, but even with relativity we don't know if it's the most "simple" solution or even if it's fully correct.

Because you cannot ever know whether a solution is in its simplest form, there is ALWAYS an opportunity for a reduction to be discovered.

You absolutely cannot say that "The solution can be as simple as the problem but no simpler" because we don't know what "simple" is and we don't know whether or not a "simpler" solution may be discovered in the future.


Well, to use your x + 1 example, we can say that one is simpler than the other, even if we can't define "simple" in exact terms, or even measure it precisely.

(By the way, I just noticed that your first sentence is perfect for this topic. I suspect that it was accidental, but it's beautiful.)

So I think that simplicity is perhaps somewhat like encoding in Shannon's information theory. Even if you can calculate the minimum number of bits required to represent something, you can't necessarily devise an encoding that will represent it in that number of bits. The analogy isn't perfect, but it has the same feel of "pushing toward something that you can't actually reach". (I guess you said the same thing with your Newton's laws analogy, so I'm kind of repeating your point here.)

But I disagree with your last paragraph. Even if we can't measure "simple", we can still kind of feel when complexity is there. If the user needs to be able to do 50 things, then you either need a program that lets the user do 50 things, or you need 50 programs (or something in between). If your program is going to let the user do 50 things, then there has to be a way for the user to specify which of those 50 things they want to do, and the code has to be able to do all 50 things. You might be able to come up with some commonalities, a unified framework to simplify things somewhat, but that can only take you so far. You're still going to have to have 50 buttons on the GUI, or 50 command-line parameters that you accept, or 50 different URLs that you interpret, or whatever.

Or, you can have 50 separate programs, and each of them can be much simpler. But then you have 50 programs, and the user has to figure out which one to use for each thing, and remember them all.

So I claim that you can kind of tell that it's there, even if you can't measure it, and that's good enough for the statement to be a useful thing to keep in mind, even if it's not something we can measure or calculate.
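
To sketch the "50 things" point in code (hypothetical handlers; any real program would have its own):

    # However you organize it, the 50 behaviors have to live somewhere.
    def do_resize(args): ...
    def do_rotate(args): ...
    # ...48 more of these...

    commands = {
        "resize": do_resize,
        "rotate": do_rotate,
        # ...one entry per thing the user can do. A framework can tidy
        # this table, but it can't make it shorter than the feature list.
    }

    def run(name, args):
        return commands[name](args)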


Just because it lacks a formal definition does not mean that somebody cannot say it. It's a human notion, not a formal one.

(Also: I'd suggest that it's pretty rude to tell somebody they "absolutely cannot say" something, and try to avoid telling you that you absolutely cannot say that somebody absolutely cannot say something)


Human notions and formal notions overlap. All formal notions are human because they are developed by humans but not all human notions are formal because formal implies internal consistency and not all humans are rational.

The difference between the two sets (human intuition and formal notions) is the set of all notions that are contradictory or irrational. Humans can be contradictory or irrational, but we should try to avoid it.

>(Also: I'd suggest that it's pretty rude to tell somebody they "absolutely cannot say" something, and try to avoid telling you that you absolutely cannot say that somebody absolutely cannot say something)

Maybe a better way to put it is "absolutely logically impossible" rather than "you absolutely cannot say." No offense intended.


There are formal definitions of complexity, of which Kolmogorov complexity is perhaps the best known.

This may not be the sort of complexity you are thinking of (perhaps you have in mind 'hard to understand', which is an important issue that is not necessarily captured by formal measures), and formal metrics measure solutions, not problems (or, at best, a specific expression of a problem). We cannot often decide whether a given solution is the simplest possible, but there are several fallacies that do not follow from this fact, such as:

1) Therefore, there cannot be a simplest solution.

2) Therefore, there is ALWAYS an opportunity for a reduction to be discovered.

[1] https://en.wikipedia.org/wiki/Kolmogorov_complexity
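
As an illustrative sketch (not a formal definition) of what Kolmogorov complexity measures:

    # Two programs with the same output. The Kolmogorov complexity of the
    # output is the length of the shortest program (in a fixed language)
    # that produces it.
    s = "ab" * 1000                          # a short description
    t = "".join("ab" for _ in range(1000))   # a longer one, same string
    assert s == t
    # Uncomputability says no procedure can certify, in general, that a
    # given description is the shortest; it does not say that a shortest
    # description fails to exist.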


>1) Therefore, there cannot be a simplest solution.

I never said this. I said that you cannot verify that the current solution you have is the simplest solution. This is based on science: nothing can be verified in science. Because we treat software more as black-box experiments than as formal systems, verification of your solution as the simplest solution is impossible until you define formal axiomatic rules. But again, this is basically never done, and even then you'd have to deal with Gödel's incompleteness.

>2) Therefore, there is ALWAYS an opportunity for a reduction to be discovered.

There is always opportunity in the sense of "possibility" if a solution hasn't been formally verified to be the simplest. 2 follows from the notion that complexity cannot be verified.

> [1] https://en.wikipedia.org/wiki/Kolmogorov_complexity

From your wikipedia source:

"The notion of Kolmogorov complexity can be used to state and prove impossibility results akin to Cantor's diagonal argument, Gödel's incompleteness theorem, and Turing's halting problem. In particular, no program P computing a lower bound for each text's Kolmogorov complexity can return a value essentially larger than P's own length (see section § Chaitin's incompleteness theorem); hence no single program can compute the exact Kolmogorov complexity for infinitely many texts."

The above states that the exact Kolmogorov complexity can never be computed for all texts and that this fact has been formally verified axiomatically.

Doesn't this support my overall point? If you can't verify the complexity of a piece of text, then you can't verify that it's the least complex, because you don't even know the complexity, let alone whether it's the least complex.


It is not entirely clear what your original point was, as it apparently involves being certain that something is possible. Are you sure?

The possibility of there being a simpler solution does not justify your claim "there is ALWAYS an opportunity for a reduction to be discovered" (and you specifically capitalized ALWAYS so we wouldn't misunderstand you!)

It might not be what you meant, but it is what you wrote, and apparently what you are now trying to justify.

Furthermore, you were originally objecting to the statement that "the solution can be as simple as the problem but no simpler", but this is a reasonable statement given that, as we both seem to agree, there is no formal metric that captures the sort of complexity we are talking about here. Not being able to settle the issue formally would not mean that we cannot discuss it usefully, and a concept of complexity in which solutions can be simpler than problems would be a strange, and probably not useful, definition.

As others have pointed out, Fred Brooks wrote about the very useful, but informal, concepts of essential and accidental complexity, to make an important point. In being pedantic about the lack of a formal definition of such terms, you are avoiding (or, at least, unnecessarily complicating) the sort of issues he was discussing (and we are here).


>It is not entirely clear what your original point was. The possibility of there being a simpler solution does not justify your claim "there is ALWAYS an opportunity for a reduction to be discovered" (and you specifically capitalized ALWAYS so we wouldn't misunderstand you!)

Let me clarify. If the simplicity of a solution cannot be verified, then the probability of the existence of a simpler solution will ALWAYS be greater than zero. This is in line with what I wrote but hopefully more clear.

>Furthermore, you were originally objecting to the statement that "the solution can be as simple as the problem but no simpler", but this is a reasonable statement given that, as we both seem to agree,

No, I don't agree. The statement is easily disproven with an example of a problem that is more complex than the solution:

  Problem: reduce -> 1*2*3*4*5*6*7*2*38*3432*0*232*22*33453*221*334/2*3929330

  Solution: 0.
>there is no formal metric that captures the sort of complexity we are talking about here

I disagree; fuzzy human intuition can always be formalized given enough thought. If the intuition is flawed and irrational, formalizations of such intuitions will make this evident and further our overall understanding.

But either way, even with this fuzzy definition of complexity, take my example of Newton's laws of motion: seemingly the simplest solution to describing motion, they were not only not the most general or simplest solution, but ultimately "less correct" than relativity. It always seems as if you have the simplest solution, but you actually never know. This is in line with my experience.

>In being pedantic about the lack of a formal definition of such terms, you are avoiding (or, at least, unnecessarily complicating) the sort of issues he was discussing (and we are here).

I'm not trying to be pedantic. I am just trying to say that there are tons of instances where something looks like it's the simplest solution but it is actually not and you can never really know either.

Formal proof that something cannot ever be verified to be in its simplest form is just the ultimate supporting pillar. We can talk in terms of opinions and vague human definitions, but that kind of argument leads to conversations that are never-ending. A formal proof moves the argument towards a final resolution. I actually wasn't expecting a formal proof to be introduced into this argument, but it was introduced by your Wikipedia source.

Before you introduced that source, I always knew that verifying that a solution is the simplest possible solution is an impossible endeavor in terms of science. Your source introduced to me that it is also an impossible endeavor in terms of logic.

Of course, the strategy for winning an argument that has already been settled by formal proof is to move the argument out of the realm of formal proof, which is what you are doing now. It's a valid strategy, but I still disagree, even on those fuzzy informal grounds.

So, to keep in line with the overall topic: can more complexity always be destroyed for a given solution? Can complexity always be reduced for a given statement?

The formal answer is: We can absolutely never know.


The thing is, prior to Gödel, one might have said "the probability of the existence of a proof of the decidability of Peano arithmetic will ALWAYS be greater than zero", and, by your argument here, that would have been correct - but, of course, one would have been mistaken.

It is your persistent and insistent use of the word 'ALWAYS' that is complicating things, and things would have been simpler if you had written of possibilities rather than certainties in the first place. You have written a great deal arguing for something that is not in doubt.

>>Furthermore, you were originally objecting to the statement that "the solution can be as simple as the problem but no simpler", but this is a reasonable statement given that, as we both seem to agree,

>No, I don't agree. The statement is easily disproven with an example of a problem that is more complex than the solution:

  >Problem: reduce -> 1*2*3*4*5*6*7*2*38*3432*0*232*22*33453*221*334/2*3929330

  >Solution: 0.
(By the way, it is rather amusing how you cut off my sentence in the middle in order to give it a meaning that the original does not have!)

Here we have an additional confusion. The solution to a travelling salesman problem is just an ordered list of vertices, but doing what you are doing here - just writing down the solution - completely misses the point.

> So to keep in line with the overall topic. Can more complexity always be destroyed for a given solution? Can complexity always be reduced for a given statement?

> The formal answer is: We can absolutely never know.

It is probably just as well that your "Solution: 0" example is beside the point, for if it were not, I would doubt your assertion that we can absolutely never know whether more complexity could always be destroyed for this given solution.


>The thing is, prior to Gödel, one might have said "the probability of the existence of a proof of the decidability of Peano arithmetic will ALWAYS be greater than zero", and, by your argument here, that would have been correct - but, of course, one would have been mistaken.

There is a difference you're not seeing. The Wikipedia article introduces the fact that it has been PROVEN that the complexity of a solution can NEVER be verified. This is equivalent to saying: "It has absolutely been proven that the probability of the existence of a simpler solution is greater than 0."

This is entirely different from what you're saying. It was never proven that the probability of the existence of a proof of the decidability of Peano arithmetic would ALWAYS be greater than zero. We can only make your statement given a lack of knowledge; given more knowledge, the statement may be untrue. My statement is saying that there is no amount of knowledge in the universe that will ever allow us to know whether a solution is in its simplest state.

>It is your persistent and insistent use of the word 'ALWAYS' that is complicating things, and things would have been simpler if you had written of possibilities rather than certainties in the first place. You have written a great deal arguing for something that is not in doubt.

Then I clarified it when you pointed out that it wasn't clear. The reason I used the term ALWAYS was to emphasize that the probability is PROVEN to be greater than zero. You didn't like it and said so, and then I clarified what I meant.

>(By the way, it is rather amusing how you cut off my sentence in the middle in order to give it a meaning that the original does not have!)

It's not my intention to do this. If you feel it was done, just point it out and I will address it accordingly. If I detect any further malice in your replies I will end this conversation.

>Here we have an additional confusion. The solution to a travelling salesman problem is just an ordered list of vertices, but doing what you are doing here - just writing down the solution - completely misses the point.

That's a valid point. If we called "0" the "conclusion," then shouldn't the "solution" include the computations to arrive at the "conclusion"?

If so, I would say that the computations in the "solution" are still simpler than the problem overall:

   problem = "1*2*3*4*5*6*7*2*38*3432*0*232*22*33453*221*334/2*3929330"

   def solve(problem):
       # A standalone 0 factor makes the whole product 0 (assuming 0 never
       # appears as a divisor), so one scan beats evaluating every operation.
       factors = problem.replace("/", "*").split("*")
       return 0 if "0" in factors else eval(problem)
If you look carefully at the computational steps of "solve", one branch does fewer calculations than the total amount of numbers in the problem. Therefore even the computational steps are smaller/simpler than the total size of the problem.

This is equivalent to saying that there exist problems of size N where the Big-O complexity of the solution is O(M) where M < N, which is certainly true. I know I sort of slaughtered the meaning of Big-O here, but you get it.

>It is probably just as well that your "Solution: 0" example is beside the point, for if it were not, I would doubt your assertion that we can absolutely never know whether more complexity could always be destroyed for this given solution.

I get your point but can you prove it?


The whole point of Kolmogorov complexity is precisely the concept of a minimal program for solving problems of a certain type. The uncomputability of Kolmogorov complexity is beside the point here, just as Turing's halting theorem does not prove that it is always possible that any given program will fail to halt.

> One branch should do less calculations then the total amount of numbers in the problem.

Judging the complexity of a solution by how it responds to a subset of problems is the same sort of thing as just stating the solution, as that subset is the only set of problems that we are considering. In this case, that set is "products of a set of numbers including at least one zero."


>The uncomputability of Kolmogorov complexity is beside the point here, just as Turing's halting theorem does not prove that it is always possible that any given program will fail to halt.

I never implied this. First off the halting problem only proves that you can never CALCULATE or PROVE whether a given program will fail to halt. It does not prove that it will always fail to halt or never fail to halt. I never said or implied otherwise.

The uncomputability of Kolmogorov complexity means you can never CALCULATE or PROVE the simplest form of a program. It does not mean that such a form doesn't exist, nor does it mean that we haven't randomly found such a form for a specific program. It just means that even if we have found something that we think is the Kolmogorov complexity of a given program, we can never verify it to be true.

This fact is sufficient to show the following can never be proven true:

"There exists a program where the Complexity can never be reduced or destroyed"

>Judging the complexity of a solution by how it responds to a subset of problems is the same sort of thing as just stating the solution, as that subset is the only set of problems that we are considering. In this case, that set is "products of a set of numbers including at least one zero."

But Kolmogorov complexity calculates exactly what you don't want to judge. See the definition:

>In algorithmic information theory (a subfield of computer science and mathematics), the Kolmogorov complexity of an object, such as a piece of text, is the length of a shortest computer program (in a predetermined programming language) that produces the object as output.

From the definition you need a predefined input from the set of all inputs and a predefined language from the set of all languages.

In terms of algorithmic complexity theory, I can define a problem to be as narrow or as large as I want. The domain for the previous problem can be the set of sets of numbers that contain one or more zeros.

Subsets of larger problems can be used to create other problems, and those problems, and all problems in general, can be used to make statements about complexity altogether. If a subset shows something is not true, then you have shown that you cannot make that blanket statement about the subset or the parent set.


>I never implied this... I never said or implied otherwise.

This is a very odd way of responding. When someone makes a point, it does not necessarily carry the implication that you said otherwise.

> First off the halting problem only proves that you can never CALCULATE or PROVE whether a given program will fail to halt.

It does not prove that. There are programs that one can prove will halt, and also others that you can prove will not halt.

> This fact is sufficient to show the following can never proven be true: "There exists a program where the Complexity can never be reduced or destroyed"

This seems to be demonstrating the same sort of misunderstanding as over what the halting theorem proves. Even where we cannot compute the Kolmogorov complexity, it does not follow that there is not a minimal one. As I mentioned before, the whole idea of Kolmogorov complexity is predicated on there being programs of minimal complexity.

Going back to that Wikipedia article, we see that "The theorem says that, among algorithms that decode strings from their descriptions (codes), there exists an optimal one."


>This is a very odd way of responding. When someone makes a point, it does not necessarily carry the implication that you said otherwise.

The first phrase was in response to your statement. The second phrase was a follow-up to my sentence that you dotted out (...). And you accused me of modifying your wording with bad intentions.

>It does not prove that. There are programs that one can prove will halt, and also others that you can prove will not halt.

Semantic error: the halting problem only proves that a program can never CALCULATE or PROVE that a given program will fail to halt. That was what I meant.

>This seems to be demonstrating the same sort of misunderstanding as over what the halting theorem proves. Even where we cannot compute the Kolmogorov complexity, it does not follow that there is not a minimal one. As I mentioned before, the whole idea of Kolmogorov complexity is predicated on there being programs of minimal complexity.

>Going back to that Wikipedia article, we see that "The theorem says that, among algorithms that decode strings from their descriptions (codes), there exists an optimal one."

No, not a demonstration of a misunderstanding of halting. It is a demonstration of a misunderstanding of Kolmogorov complexity, as you only just introduced me to it today.

Either way, given any code, you cannot determine whether it is the optimal one, even if there does exist an optimal one, as the theorem states. So the intent still stands; let me modify the statement to get the semantics correct while preserving the intent:

"For any program you can determine whether Complexity for that program can be reduced or destroyed"

The above statement is false.

So in all practicality, even though a simplest form always exists, we can never know whether we have found it. So the possibility is always there, and the intent of what I said is still preserved, if not made stronger; see the initial reply of this thread:

"The examples are obvious but from the examples you can deduce that their are many instances in software engineering where such reductions can exist but are not obvious at all."

Kolmogorov complexity still offers axiomatic support for my statement. In fact, it makes the statement stronger and changes it to:

"In all instances of software engineering reductions can exist, you can never determine otherwise."

I can't say for sure, but I believe that the statement above applies not only to Kolmogorov complexity but to algorithmic complexity as well.


Excellent! You do indeed have a stronger statement, now that various misunderstandings have been fixed.


I take your point, but the complexity I’m talking about is what remains after you’ve done any readily applicable simplifications.

Any normal task has an inherent level of complexity (“add one” is a very simple task, so can be reduced down as you point out). “Invert a matrix” is somewhat less reducible. “Calculate a quaternion” even less so, etc.

We generally string dozens or hundreds of these simpler tasks into sequences to get what we want, and human understanding being what it is, there’s quite a difference in conceptual complexity between “fly the plane to [here]” and doing all that maths.

There’s probably a relationship to subitising here as well, the brain does seem to be able to process small groups of things, even if it knows those things represent larger concepts, better than larger groups of things. This is just speculation on my part though.


I got into a really deep conversation about this here:

https://news.ycombinator.com/item?id=23042796

You may have seen it, it's in the same thread of conversation we're having.

Basically, there's actually axiomatic proof that you can never know whether you have applied all possible simplifications. Therefore you can never know whether or not more complexity can be destroyed or reduced.


With respect, I’m not seeing how your first addOne example returns an integer one greater than its integer argument.

Am I missing something?


  (x * 1) + 0 - x + x + 1
  x + 0 - x + x + 1
  x - x + x + 1
  0 + x + 1
  x + 1

