
If the code is ever visible to anyone else ever, you have no guarantee. If it’s actually valuable, you have to protect it the same way you’d protect a pile of gold bars.

What does “my code...for my clients” mean (is it yours or theirs)? If it’s theirs let them house it and delegate access to you. If they want to risk it being, ahem...borrowed, that’s their business decision to make.

If it’s yours, you can host it yourself and maintain privacy, but the long-tail risk of maintaining it is not as trivial as it seems on the surface. You need encrypted backups at multiple, geographically distant locations, so either you need physical security, or you’re using the cloud and need monitoring and alerting, and then something to monitor the monitor.
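
The “encrypted, at different locations” part can be a small script; everything else (key custody, monitoring, restore drills) is the long tail. A minimal sketch, assuming the cryptography package, with hypothetical paths:

    # encrypt once, then copy to two geographically separate targets
    # (key management is the hard part: store the key apart from the backups)
    from pathlib import Path
    from cryptography.fernet import Fernet
    import shutil

    key = Fernet.generate_key()
    data = Path("repo.tar").read_bytes()
    Path("repo.tar.enc").write_bytes(Fernet(key).encrypt(data))

    shutil.copy("repo.tar.enc", "/mnt/local-nas/repo.tar.enc")
    shutil.copy("repo.tar.enc", "/mnt/offsite-sync/repo.tar.enc")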

It’s like life. Freedom means freedom from tyranny, not freedom from obligation. Choosing a community or living solo in the wilderness both come with different obligations. You can pay taxes (and hope you’re not getting screwed, too much), or you can fight off bears yourself, etc.


Society won’t delay present reward for future good on its own. Even if one person will, there’s a line of people who will step in to pollute the lake or kill the whales for a bag of money.

It will just decay until it’s a short squeeze into oligarchy or worse (the corrupt will be forced into an arms race of accelerating corruption as opportunity becomes scarce). Then some other country that isn’t leaving it up to its society to do the right thing will be in charge. Until the same happens to them.

This is the value of religion historically, one of the few ways of coercing a population into doing the right thing for their own good. But every group can be spoiled or hijacked by a small handful of bad actors who are willing to do what others are not.


A flawless predictor would indicate you’re in a simulation, though we cannot even simulate multiple cells at the most fine-grained level of physics.

But also you’re right that even a pretty good (but not perfect) predictor doesn’t change the scenario.

What I find interesting is to change the amounts. If the open box has $0.01 instead of $1000, you’re not thinking “at least I got something”, and you just one-box.

But if both boxes contain equal amounts, or you swap the amounts in each box, two-boxing is always better.

All that to say, the idea that the right strategy here is to “be the kind of person who one-boxes” isn’t a universal virtue. If the amounts change, the virtues change.
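
To make the “pretty good (but not perfect) predictor” point concrete, here is a quick expected-value check with the classic amounts ($1,000 visible, $1,000,000 opaque) and an assumed predictor accuracy p:

    # EV of each strategy against a predictor that is right with probability p
    def ev_one_box(p):
        return p * 1_000_000

    def ev_two_box(p):
        return 1_000 + (1 - p) * 1_000_000

    for p in (0.5, 0.9, 0.999):
        print(p, ev_one_box(p), ev_two_box(p))

    # one-boxing already wins for any p > 0.5005, far short of flawless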


> A flawless predictor would indicate you’re in a simulation [...]

No, it does not. Replace the human entering the room with a computer; the predictor analyzes the computer and the software running on it as it enters. If the decision program does not query a hardware random source and no stray cosmic particle changes the choice, the predictor could perfectly predict the choice just by emulating the computer accurately enough. If the program makes any use of external inputs, say the image from an attached webcam, the predictor also needs to know those inputs well enough. The same could, at least in principle, work for humans.
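
A toy version of that argument, with a deterministic "chooser" and a predictor that simply emulates it (both functions are made up for illustration):

    # if the decision program is deterministic and its inputs are known,
    # prediction is nothing more than running a copy of it
    def chooser(inputs):
        return "one-box" if sum(inputs) % 2 == 0 else "two-box"

    def predictor(program, inputs):
        return program(inputs)  # perfect prediction by emulation

    inputs = [3, 1, 4]
    assert predictor(chooser, inputs) == chooser(inputs)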


I agree with you that it doesn't require that you are in a simulation, but a flawless predictor would be a strong indication that a simulation is possible, and that should raise our assumed probability that we're in a simulation.


I would think that the existence of a flawless predictor is probably more likely to indicate that memories of predictions, and any associated records, have been modified to make the predictor appear flawless.


I would say that if we presume the memories of everyone involved have been modified, that is an equally strong indication that we are in a simulation.


Where does this obsession with the simulation hypothesis come from? It has become so widespread in recent years. It is more or less pointless to think about; it will not get you anywhere. You only know this universe, to some extent, but you have no idea what a real universe looks like and no idea what a simulated universe looks like, so you will never be able to tell which kind ours is.

But what if we discover that our universe is made of tiny voxels or something like that? That would be undeniable evidence, right? Wrong! Who says that real universes are not made of tiny voxels? It could be [1] the other way around: maybe real universes are discrete but their universe simulations are continuous, in which case the lack of tiny voxels in our universe would be the smoking-gun evidence for being in a simulation.

[1] This is meant as an example; I have no idea whether one can actually come up with a discrete universe that admits continuous simulations, which should probably also be efficient in some sense.


That’s a great question and a very realistic thing for us to answer. There is definitely no increase in AI here. If you’d like, I can walk you through how the best posters arrive at this conclusion in the normal human way. Just say the word.


> we just need to make the spec perfect

So, never.

Greg Kroah-Hartman was once asked by his boss, “when will Linux be done?” and he said, “when people stop making new hardware”, adding that even today, when we assume the hardware won’t lie, much of the work in maintaining Linux is around hardware bugs.

So even at the lowest levels of software development, you can’t know the bugs you’re going to have until you partially solve the problem and discover that some combination of hardware and drivers produces an error, and you only discover that because someone with that combination tried it. There is no way to prevent that by “making a better spec”.

But that’s always been true. Basically it’s the three-body problem. On the spectrum of simple, complicated, and complex, you can calculate the future state of a system if it’s simple, or sometimes if it’s only complicated, but you literally cannot know the future state of a complex system without simulating it: running each step and finding out.
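
A toy illustration of that last point, using the chaotic logistic map as a stand-in for a complex system:

    # two starting states that agree to 10 decimal places end up nowhere
    # near each other, so the only way to learn the future state is to
    # actually run every step
    def step(x, r=3.9):  # one iteration of the chaotic logistic map
        return r * x * (1 - x)

    a, b = 0.2, 0.2 + 1e-10
    for _ in range(60):
        a, b = step(a), step(b)

    print(a, b)  # wildly different after 60 steps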

And it gets worse. Software ranges from simple to complicated to complex. But it exists within a complex hardware environment, and also within a complex business environment where people change and interest rates change and motives change from month to month.

There is no “correct spec”.


The idea that people are going to YOLO changes to DNS and Postgres migrations gives me such anxiety, knowing the pain people are in for when they “point Claude at it, one prompt, and done”, and then their business is dead in the water for a week or two while every executive tries to micromanage the recovery.

I love Streamlit and Mermaid, but if these are the shining examples, this isn’t a good sign. These have hard ceilings, and there’s only so far you can work around the model of “rerun the entire Python script every time the page changes”.
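
For anyone who hasn’t used it, a minimal sketch of that rerun model (standard Streamlit API; the counter app itself is just an illustration):

    import streamlit as st

    # the whole script re-runs from the top on every interaction,
    # so anything that must survive goes in st.session_state
    if "clicks" not in st.session_state:
        st.session_state.clicks = 0

    if st.button("Click me"):  # clicking triggers a full rerun
        st.session_state.clicks += 1

    st.write(f"Clicked {st.session_state.clicks} times")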

As long as humans are involved the UI will matter. Maybe the importance is not on the end-user facing UI, and maybe it’s more on the backend SRE-level observability UI that gives insight into whether the wheels are still on the bus, but it will matter.

Some people are getting the AI to handle that too, and like all demos, that will work until it doesn’t. I’m sure NASA or someone can engineer it well enough to work, but it’s always going to be a tradeoff: how fast you can go depends more on the magnitude of the crash you can survive than on the top speed someone achieves once and lives to tell about it.


> it allows our margins to be higher and our speed of implementation to be faster

Faster than what? You will be faster than your previous self, just like all of your competitors, so where’s the net gain? Even if you somehow managed to capture more value for yourself, you’ve stopped providing income to 5-10x that many people, who are no longer employed.

When costs approach zero on a large scale, margins do not increase. Low costs = you’re not paying anyone = your competitors aren’t paying anyone = your customers no longer have money = your revenue follows your costs straight to zero.

Companies that provide physical services can’t scale without hiring. A one-man “crew” isn’t putting a roof on a data center.

I want to be wrong. Tell me why you think any of this is wrong.


I don't think you are wrong. I find that many tech people/founders excited by AI don't understand end-game economics in general. Like kids excited by a new toy starting their new startup, they don't see the end game if this all plays out, or they are hopeful that they are the lucky ones.

Generally, once an industry becomes a cheap commodity, it is at best cost-based pricing. If you aren't charging at cost, I will go to whoever is, especially in a saturated market.

Ironically, large corporations, rather than tech companies, are probably where the SWE jobs of the future are: cost-based pricing in cost centres, creating their own software with domain knowledge rather than generic SaaS. Shared platforms will probably still have some value, but the value there isn't from the effort in the code; it comes more from things like network effects, physical control, regulation, etc. Not an industry to get into anymore, IMO: AI is destroying SWE.

Software was always a means to an end, albeit an expensive way to get there that often paid off anyway at scale. The means is getting cheaper; the end remains.


But they can scale hiring


In my editor this looks like this, with an extension like Tailwind Fold or Inline Fold:

    <div class="...">
      <p class="...">
        Because Tailwind is so low-level, it never encourages you to design the same site twice. Some of your favorite sites are built with Tailwind, and you probably had no idea.
      </p>
    </div>


Ok, and how does it look when you want to read or edit the “classes”?


Yeah, Tailwind feels to me like a "write-only" solution.


FWIW, “colocation in component-based architecture” doesn’t necessarily mean shared code. It can just mean the one thing has all of its parts in one place, instead of having HTML in one file, CSS in another, JS in another.

You’re right about DRY and code reuse very often being a premature (wrong) abstraction, which is usually more of a problem than a few copy/pastes, because premature wrong abstractions become entrenched and harder to displace.


The cognitive load of looking at 12 open files trying to understand what’s happening. Well, in fairness, some of those 12 are the same file, because we have one part for the default CSS and then one for the media query that’s 900 lines further down the file.


CSS Modules get away from the larger sins of massive global CSS files.

If modules had existed much earlier, they probably would’ve gotten rid of most of the awfulness.


CSS Modules are way older than Tailwind, but alas, they were not enough.


Oh, great. So let’s just 2x all our files then! All for what, exactly?

It sounds like you just want to write Java.


If your complaint is that your styles are so complicated that they live in a giant 900-line mega-file, I don’t see how you address the physical size other than by breaking up the file.

Granted, nesting support was also added fairly recently in the grand scheme of things, which boggles the mind given that it was such an obvious problem, and such an obvious solution, that CSS preprocessors came about to address it.


What do you mean 12 files? It’s 2 files. One for your component and one for its styles module.

