Even the unnecessary microcontrollers in modern devices irk me. A fridge does not need a microcontroller. (My issue is primarily repairability - discrete components can be sourced and replaced, microcontrollers with the correct programming usually cannot.)
The wifi devices are cheaper IME, so I created an IoT VLAN that has no internet access, nor access to my normal network unless the connection was initiated from the trusted side.
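On the router, the forwarding rules boil down to something like this (just a sketch using iptables; the interface names br-lan and br-iot are made up and will differ per setup):

    # Trusted network may open connections into the IoT VLAN...
    iptables -A FORWARD -i br-lan -o br-iot -j ACCEPT
    # ...and only replies to those connections may come back the other way.
    iptables -A FORWARD -i br-iot -o br-lan -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    # Everything else from the IoT VLAN is dropped, including traffic toward the internet.
    iptables -A FORWARD -i br-iot -j DROP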
Ikea home automation is all Zigbee, and that stuff is dirt cheap. But if you want something Ikea doesn't sell, like wall switches and thermostats, wifi stuff does tend to be a hair cheaper.
A VLAN sounds like the way to go to quarantine devices.
> Can't understand why manufacturers make this hard as it sells hardware.
Because a lot of features that cost a lot of money are only software limitations. With many of the cheaper cameras, the max shutter speed and video capabilities are limited in software to widen the gap with the more expensive models. So they do sell hardware - but opening up the software would make their higher-end offerings less compelling.
There is definitely a divide in users - those for whom it works and those for whom it doesn't. I suspect it comes down to what language and what tooling you use. People doing web-related or Python work seem to be doing much better than people doing embedded C or C++. Similarly, doing C++ in a popular framework like Qt also yields better results. When the system design is not pre-defined and rigid, as it is in Qt, you get completely unmaintainable code as a result.
If you are writing code that is/can be "heavily borrowed" - things that have complete examples on Github, then an LLM is perfect.
While I agree that AI assisted coding probably works much better for languages and use cases that have a lot more relevant training data, when I read comments from people who like LLM assisted coding vs. those that don't, I strongly get the impression that the difference has a lot more to do with the programmers than their programming language.
The primary difference I see in people who get the most value from AI tools is that they expect it to make mistakes: they always carefully review the code and are fine with acting, in some cases, more like an editor than an author. They also seem to have a good sense of where AI can add a lot of value (implementing well-defined functions, writing tests, etc.) vs. where it tends to fall over (e.g. tasks where large-scale context is required). Those who can't seem to get value from AI tools appear (at least to me) less tolerant of AI mistakes and less willing to iterate with AI agents, and more willing to "throw the baby out with the bathwater", i.e. they fixate on some of the failure cases rather than limiting usage to the cases where AI does the better job.
To be clear, I'm not saying one is necessarily "better" than the other, just that the reason for the dichotomy has a lot more to do with the programmers than the domain. For me personally, while I get a lot of value in AI coding, I also find that I don't enjoy the "editing" aspect as much as the "authoring" aspect.
If it takes 10x the time to do something, did you learn 10x as much? I don't mind repetition; I learned that way for many years and it still works for me. I recently made a short program using AI assist in a domain I was unfamiliar with. I iterated probably 4x. Iterations were based on learning about the domain, both from the AI results that worked and from researching the parts that seemed either extraneous or wrong. It was fast, and I learned a lot. I would have learned maybe 2x more doing it all from scratch, but it would have taken at least 10x the time and effort to reach the result, because there was no good place to immerse myself. To me, that is still useful learning, and I can do it 5x before I have spent the same amount of time.
It comes back to other people's comments about acceptance of the tooling. I don't mind the somewhat messy learning methodology - I can still wind up at a good result quickly, and learn. I don't mind that I have to sort of beat the AI into submission. It reminds me a bit of part lecture, part lab work. I enjoy working out where it failed and why.
The fact is that most people skip learning about what works (learning is not cheap mentally). I've seen teammates just trying stuff (for days) until something kinda works instead of spending 30 minutes doing research. The fact is that LLMs are good at producing something that looks correct, which wastes the reviewer's time. It's harder to review something than to write it from scratch.
Learning is also exponential: the more you do it, the faster it gets, because you may already have the foundations for that particular bit.
> I strongly get the impression that the difference has a lot more to do with the programmers than their programming language.
The problem with this perspective is that anyone who works in more niche programming areas knows the vast majority of programming discussion online isn't relevant to them. E.g., I've done macOS/iOS programming most of my career, and I now do work that's an order of magnitude more niche than that, and I commonly see programmers saying things like "you shouldn't use a debugger", which is a statement I can't imagine a macOS or iOS programmer making (don't get me wrong, they're probably out there, I've just never met or encountered one). So you just become used to most programming conversations being irrelevant to your work.
So of course the majority of AI conversations aren't relevant to your work either, because that's the expectation.
I think a lot of these conversations are two people with wildly different contexts trying to communicate, which is just pointless. Really we just shouldn't be trying to participate in these conversations (the more niche programmers that is), because there's just not enough shared context to make communication effective.
We just all happen to fall under this same umbrella of "programming", which gives the illusion of a shared context. It's true there are some things that are relevant across the field (it's all just variables, loops, and conditionals), but many of the other details aren't universal, so it's silly to talk about them without first understanding the full context around the other person's work.
> and I commonly see programmers saying things like "you shouldn't use a debugger"
Sorry, but who TF says that? This is actually not something I hear commonly, and if it were, I would just discount this person's opinion outright unless there were some other special context here. I do a lot of web programming (Node, Java, Python primarily) and if someone told me "you shouldn't use a debugger" in those domains I would question their competence.
No one likes to hear it, but it comes down to prompting skill. People who are terrible at communicating and delegating complex tasks will be terrible at prompting.
It's no secret that a lot of engineers are bad at this part of the job. They prefer to work alone (i.e. without AI) because they lack the ability to clearly and concisely describe problems and solutions.
This. I work with juniors who have no idea what a spec is, and the idea of designing precisely what a component should do, especially in error cases, is foreign to them.
> If you are writing code that is/can be "heavily borrowed" - things that have complete examples on Github, then an LLM is perfect.
I agree with the general premise. There is however more to it than "heavily borrowed". The degree to which a code base is organized and structured and curated plays as big of a role as what framework you use.
If your project is a huge pile of unmaintainable and buggy spaghetti code, then don't expect an LLM to do well. If your codebase is well structured, clear, and follows patterns systematically, then of course a glorified pattern-matching service will do far better at outputting acceptable results.
There is a reason why one of the most basic vibecoding guidelines is to include a prompt cycle to clean up and refactor code between introducing new features. LLMs fare much better when the project in their context is in line with their training. If you refactor your project to align it with what a LLM is trained to handle, it will do much better when prompted to fill in the gaps. This goes way beyond being "heavily borrowed".
I don't expect your average developer struggling with LLMs to acknowledge this fact, because then they would need to explain why their work is unintelligible to a system trained on vast volumes of code. Garbage in, garbage out. But who exactly created all the garbage going in?
I suspect it comes down to how novel the code you are writing is and how tolerant of bugs you are.
People who use it to create a proof of concept of something that is in the LLM training set will have a wildly different experience to somebody writing novel production code.
Even there, the people who rave the most rave about how well it does boilerplate.
> When the system design is not pre-defined or rigid like
Why would an LLM be any worse building from language fundamentals (which it knows, in ~every language)? Given how new this paradigm is, the far more obvious and likely explanation seems to be: LLM-powered coding requires somewhat different skills and strategies, and the success of each user heavily depends on their learning rate.
I think there are still lots of code “artisans” who are completely dogmatic about what code should look like. Once the tunnel vision goes and you realise the code just enables the business, it all of a sudden becomes a velocity godsend.
Two years in and we are waiting to see all you people (who are free of our tunnel vision) fly high with your velocity. I don't see anyone; am I doing something wrong?
Your words predict an explosion of unimaginable magnitude in new code and new businesses. Where is it? Nowhere.
Edit: And don't start about how you vibed a SaaS service; show income numbers from paying customers (not buyouts).
The author of the library (kentonv) comments in the HN thread that it took him a few days to write the library with AI help, while he thinks it would have taken weeks or months to write manually.
Also, while it may be technically true we're "two years in", I don't think this is a fair assessment. I've been trying AI tools for a while, and the first time I felt "OK, now this is really starting to enhance my velocity" was with the release of Claude 4 in May of this year.
But that example is of writing a green field library that deals with an extremely well documented spec. While impressive, this isn’t what 99% of software engineering is. I’m generally a believer/user but this is a poor example to point at and say “look, gains”.
I have insight into enough code bases to know it's a non-zero number. Your logic is bizarre: if you'd never seen a kangaroo, would you just believe they don't exist?
Show us the numbers, stop wasting our time. NUMBERS.
Also, why would I ever believe kangaroos exist if I haven't seen any evidence of them? This is a fallacy. You are portraying healthy skepticism as stupid because you already know kangaroos exist.
What numbers? It doesn't matter if it's one or a million; it's had a positive impact on the velocity of a non-zero number of projects.
You wrote:
> Two years in and we are waiting to see all you people (who are free of our tunnel vision) fly high with your velocity. I don't see anyone, am I doing something wrong?
Yes is the answer. I could probably put it in front of your face and you’d reject it. You do you. All the best.
Perhaps I'm misreading the person to whom you're replying, but usefulness, while subjective, isn't typically based on one person's opinion. If enough people agree on the usefulness of something, we as a collective call it "useful".
Perhaps we can take the example of a blender. There's enough need to blend/puree/chop food-like items that a large group of people agree on the usefulness of a blender. A salad shooter, while a novel idea, might not be seen as "useful".
Creating software that most folks wouldn't find useful still might be considered "neat" or "cool". But it may not be adding anything to the industry. The fact that someone shipped something quickly doesn't make it any better.
Ultimately, or at least in this discussion, we should decouple the software’s end use from the question of whether it satisfies the creator’s requirements and vision in a safe and robust way. How you get there and what happens after are two different problems.
It's not for nothing. When a profitable product can be created in a fraction of the time and effort previously required, the tool to create it will attract scammers and grifters like bees to honey. It doesn't matter if the "business" around it fails, if a new one can be created quickly and cheaply.
This is the same idea behind brands with random letters selling garbage physical products, only applied to software.
The issue is not with how code looks. It's with what it does, and how it does it. You don't have to be an "artisan" to notice the issues moi2388 mentioned.
The actual difference is between people who care about the quality of the end result, and the experience of users of the software, and those who care about "shipping quickly" no matter the state of what they're producing.
This difference has always existed, but ML tools empower the latter group much more than the former. The inevitable outcome of this will be a stark decline of average software quality, and broad user dissatisfaction. While also making scammers and grifters much more productive, and their scams more lucrative.
There are very good reasons that code should look a certain way; they come from years of experience and the fact that code is written once but read and modified much more.
When the first bugs come up, you see that the velocity was not god-sent, and you end up hiring one of the many "LLM code fixer" companies that are popping up like mushrooms.
No, they're not. It's critically important if you're part of an engineering team.
If everyone does their own thing, the codebase rapidly turns to mush and is unreadable.
And you need humans to be able to read it the moment the code actually matters and needs to stand up to adversaries. If you work with money or personal information, someone will want to steal that. Or you may have legal requirements you have to meet.
You’ve made a sweeping statement there; there are swathes of teams working in startups still trying to find product-market fit. Focusing on quality in these situations is folly, but that’s not even the point. My point is you can ship quality to any standard using an LLM, even your standards. If you can’t, that’s a skill issue on your part.
> Counter fact: In our city district, which is the richest and biggest district of our 600k developed city, we decided to turn off the street lights at night on purpose to help with sleeping better.
To me that seems like a really alien solution. What about closing the curtains?
I was in the prep-meeting for that decision. We don't like curtains. We don't like wasting energy. We don't like light pollution. We prefer peaceful nights.
Where I'm from, moonlight on its own will disrupt my sleep frequently enough that even if my neighbor hadn't left his back light on, I would still sleep with my blackout curtains closed.
When it comes to driving, I would definitely prefer they keep the street lights on, for the increased visibility/safety.
>The rest of us are dealing with old code that is a hodgepodge of older standards and toolchains, that has to run in multiple environments, mostly old ones. It's like yeah, this C++26 feature will come in handy for me someday, but if that day comes then it will be in 2036, and I might not be writing C++ by then.
Things seem to be catching up. I had the same view up until recently, but now I'm able to use most of the C++23 features on an embedded platform (granted, some are still missing, since we're limited to GCC 11.2).
> But, the convention in the US is that people see their houses as a form of savings.
And that is a big part of the problem. You cannot have it both ways: if housing is an investment, it will eventually lead to poorer outcomes for anybody who needs a house and does not inherit one.
Though I much prefer this solution, the GP solution is better when there are non-text documents in the directory tree. Find is nice in that you can narrow it down by file name or file extension without relying on bash globs.
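For example, something like this limits the search to a single extension (the pattern and extension here are just placeholders):

    find . -name '*.md' -exec grep -n 'pattern' {} +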
A couple of years ago I could not get anything OpenGL working over ssh, no matter how hard I tried. Ever since I just accepted that as fact. But I tested it now and it just works!
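For anyone who wants to try it, roughly what I ran (assuming the remote box has the Mesa demo utilities installed; package names vary by distro):

    ssh -X remote-host                  # enable X11 forwarding for the session
    glxinfo | grep 'direct rendering'   # see what kind of GLX rendering you ended up with
    glxgears                            # trivial spinning-gears test program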
Hasn't this always been the case? I thought one of the whole reasons for the weird design of the OpenGL API was that it was a client-server sort of thing, to allow for network transparency.
I was doing this circa 2005 with an OpenGL program running on a Linux box in a server closet and a Windows machine on my desk running an X11 server. Before I did that, I researched the remote-draw capabilities of both X11 and OpenGL and concluded that what I eventually ended up doing would work just fine.