
Embedded programming is still like this. Most people just don't inspect the assembly produced by their compiler. Unless you're working on an extremely mainstream chip with a bleeding edge compiler, your assembly is going to be absolutely full of complete nonsense.

For instance, if you aren't aware, AVR and most other microcontrollers have special pointer registers and addressing modes. Say you put a pointer to an array in Z: you can load the value at Z and post-increment (or pre-decrement) the pointer in a single instruction.

GCC triples the cost of this operation with some extremely naive implementations.

Instead of a single `ld Rd, Z+`, GCC gives you something like:

```
adiw r30, 1   ; increment Z (Z is the r31:r30 pair)
ld   r24, Z
sbiw r30, 1   ; decrement Z
```
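
For reference, the pattern in question is nothing exotic; a hypothetical minimal case (my example, not from the original post):

```c
#include <stdint.h>

/* Walk a buffer through a pointer. On AVR, each iteration should
   ideally compile to a single post-increment load: ld rN, Z+ */
uint8_t sum(const uint8_t *p, uint8_t n)
{
    uint8_t s = 0;
    while (n--)
        s += *p++;
    return s;
}
```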

That's just one of many similar annoyances. You can carefully massage the C++ code to get better assembly, but that can take many hours of crazy-making debugging. Sometimes it's best to just write the damn assembly by hand.
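
For illustration, "writing the damn assembly by hand" doesn't have to mean a separate .S file; here's a minimal sketch using avr-gcc inline assembly (the helper name is mine):

```c
#include <stdint.h>

/* Force the post-increment load through Z. The "z" constraint pins
   the pointer to the Z register pair (r31:r30), so the compiler
   cannot split the operation into separate instructions. */
static inline uint8_t load_postinc(const uint8_t **p)
{
    uint8_t v;
    __asm__ volatile(
        "ld %0, Z+"          /* v = *Z, then Z++: one instruction */
        : "=r"(v), "+z"(*p)
        :
        : "memory");
    return v;
}
```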

In this same project, I had to implement Morton ordering on a 3D bit field (don't ask). The C implementation was well over 200 instructions, but by using CPU features GCC doesn't know about, my optimized assembly came in under 30.
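
(For context: Morton, or Z-order, encoding interleaves the bits of the coordinates. The conventional portable C starting point looks roughly like the magic-mask spread below, here for 10-bit coordinates; the original post doesn't show its code, so this is only the textbook version, not the optimized one.)

```c
#include <stdint.h>

/* Spread the low 10 bits of v so each bit lands every third position
   (the standard magic-mask spread used for 3D Morton codes). */
static uint32_t spread3(uint32_t v)
{
    v &= 0x000003FF;
    v = (v | (v << 16)) & 0x030000FF;
    v = (v | (v << 8))  & 0x0300F00F;
    v = (v | (v << 4))  & 0x030C30C3;
    v = (v | (v << 2))  & 0x09249249;
    return v;
}

/* Interleave x, y, z into one Morton (Z-order) index. */
static uint32_t morton3(uint32_t x, uint32_t y, uint32_t z)
{
    return spread3(x) | (spread3(y) << 1) | (spread3(z) << 2);
}
```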

Modern sky-high abstracted languages are the source of brain rot, not compilers or IDEs in general. Most programmers are completely and utterly detached from the system they're programming. I can't see how one could ever make any meaningful progress or optimization without any understanding of what the CPU actually does.

And this is why I like embedded. It's very firmly grounded in physical reality. My code is only slightly abstracted away from the machine itself. If I can understand the code, I understand the machine.




And this is appropriate for your domain and the jobs you work on.

If your job was to build websites, this would drive you insane.

I think I'm coming around to a similar position on AI dev tools: it just matters what you're trying to do. If it's a well-known problem that's been done before, by all means. Claude Code is the new Ruby on Rails.

But if I need to do some big boy engineering to actually solve new problems, it's time to break out Emacs and get to work.


The vast majority of time spent building software has little to do with optimization. Sky-high abstracted brain rot languages are useful precisely because you usually don't need to worry about the kinds of details you would if you were optimizing for performance.

And when you are optimizing for performance, you can make an incredible amount of progress just by fixing the crappy Java (etc.) code before you ever need to drop down a layer of abstraction.

Even hedge funds, which make money by executing trades fractions of a millisecond quicker than everyone else, use higher-level languages and fix performance issues within those languages when needed.


My assertion is that this is a very bad thing. This is why we have Electron and pay Amazon $900/mo to host what should be a static website on a Pentium 4.

Since optimization is not a concern, waste is not a concern. Sure, send the user a 200MB JavaScript blob for the visit counter, who cares?

AT&T's billing website bounces you back and forth between four different domains, each of which downloads 50MB of scripts just to redirect you to the next, before landing on what looks like a Flash app from 2008. It takes five minutes to get my invoice because nothing matters. God alone knows how much it costs them to serve all of that garbage; probably millions.

This is a bad way to make software. It's not even a good way to make bad software.

The brain rot is the disconnect between the software and reality. The reality is that software has to run somewhere, and it costs someone real money per unit of resource. You can either make good software that consumes fewer resources or bad software that wastes everyone else's time and money. But hey, as long as you ship on time and get that promotion, who cares, right?


As a long-time embedded programmer, I don't understand this. Even 20 years ago, there was no way I really understood the machine, despite writing assembly and looking at compiler output.

Ten years ago, running an ARM core at 40 MHz, I barely had any need to inspect my compiler's assembly. I could still roughly read it when I needed to (since embedded compilers tend to have bugs more regularly), but there's no way I could write assembly anymore. I had no qualms at the time about using a massively inefficient library like Arduino to try things out. If it works and the timing is correct, it works.

These days, when I don't do embedded for work, I have no qualms about writing my embedded projects in MicroPython. I want to build things, not micro-optimize assembly.


> As a long-time embedded programmer, I don't understand this

I think you both should define what your embedded systems look like. The range is vast, after all: it runs from 8-bit CPUs [0] with a few dozen kilobytes of RAM to what is almost a full modern PC. Naturally, the incentives to program at a low level are very different across that range.

[0] https://www.silabs.com/mcu/8-bit-microcontrollers/why-8-bit-...


I was trying to bit-bang five 250 kHz I2C channels on a 16 MHz ATtiny while acting as an I2C slave on a sixth channel.

This is really not something you can do with normal methods: at 16 MHz there are only 64 clock cycles per 250 kHz bit period, and the compiler's output is far too long to fit. No high-level language can do what I want because the compiler is too stupid. My inline assembly is simple and fast enough that I can hit the bitrate I need.
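
For a flavor of what that cycle-counted bit-banging looks like, a minimal sketch with avr-gcc inline assembly (the port and pin choice are my assumptions, not from the original post):

```c
#include <avr/io.h>

#define SCL_PIN 0  /* hypothetical: SCL on PORTB bit 0 */

/* Emit one clock pulse with exact timing. sbi and cbi each take
   2 cycles on a classic AVR, so the pulse width is known precisely. */
static inline void scl_pulse(void)
{
    __asm__ volatile(
        "sbi %[port], %[pin] \n\t"  /* drive SCL high (2 cycles) */
        "nop                 \n\t"  /* pad the high phase (1 cycle) */
        "cbi %[port], %[pin] \n\t"  /* drive SCL low (2 cycles) */
        :
        : [port] "I"(_SFR_IO_ADDR(PORTB)), [pin] "I"(SCL_PIN));
}
```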

In my view, there are two approaches to embedded development: programming à la mode with Arduino and whatever unexamined libraries you find online, or the register-hacker path.

There are people who throw down any code that compiles and move on to the next thing without critical thought; the industry is overflowing with them. Then there are the people who read the datasheet and the instruction set, the people painstakingly writing their own drivers for I2C widgets instead of shoving magic strings into Wire.write().
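
To make the contrast concrete, here is what the datasheet path looks like on a classic ATmega's TWI (I2C) peripheral; a minimal sketch of my own, not the poster's code, with no error handling:

```c
#include <avr/io.h>
#include <stdint.h>

/* Issue a START condition and address a slave for writing, following
   the TWI register flow from the ATmega datasheet. */
static void twi_start_write(uint8_t addr7)
{
    TWCR = (1 << TWINT) | (1 << TWSTA) | (1 << TWEN); /* send START */
    while (!(TWCR & (1 << TWINT)))
        ;                                             /* wait for START */
    TWDR = (uint8_t)(addr7 << 1);                     /* SLA+W */
    TWCR = (1 << TWINT) | (1 << TWEN);                /* clock it out */
    while (!(TWCR & (1 << TWINT)))
        ;
}
```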

I enjoy micro-optimizing assembly. I find it personally satisfying and rewarding. I thoroughly examine and understand my work because no project is ever truly a throwaway. With every project I learn something new, and I have a massive library of tricks I can draw from in all kinds of crazy situations.

If you did sit down to thoroughly understand the assembly of your firmware projects, you'd likely be aghast at the quality of code you're blindly shipping.

All that aside for a moment, consider the real cost of letting your CPU run code 10x slower than it should: the CPU runs 10x longer and consumes proportionally more energy. If you're building a battery-powered widget, that can matter a lot. If your CPU is more efficient, you can afford a lighter battery or less cooling. You have to consider the system as a whole.

This attitude of "ship anything as quickly as possible" is very bad for the industry.



