What's reasonable is: "Set reserved fields to 0 when writing and ignore them when reading." (I heard that was the original example). Or "Ignore unknown JSON keys" as a modern equivalent.
What's harmful is: Accept an ill-defined superset of the valid syntax and interpret it in undocumented ways.
Funny I never read the original example. And in my book, it is harmful, and even worse in JSON, since it's the best way to have a typo somewhere go unnoticed for a long time.
The original example is very common in ISAs at least. Both ARMv8 and RISC-V (likely others too, but I don't have as much experience with them) have the idea of requiring software to treat reserved bits as if they were zero for both reading and writing. ARMv8 calls this RES0, and a hardware implementation is constrained to either ignoring writes to the field (e.g., reads are hardwired to zero) or returning the last successful write.
This is useful as it allows the ISA to remain compatible with code that is unaware of future extensions defining new functionality for these bits, so long as the zero value means "keep the old behavior". For example, a system register may have an EnableNewFeature bit, and older software will end up just writing zero to that field (which preserves the old functionality). This avoids needing to define a new system register for every new feature.
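As a minimal sketch of the software side of that convention (the register layout, mask, and bit position below are all invented for illustration, not from any real ISA):

```rust
// Hypothetical register layout, purely for illustration.
const DEFINED_MASK: u64 = 0b0111;       // bits the current spec defines
const ENABLE_NEW_FEATURE: u64 = 1 << 3; // added by a later spec revision

// Treat reserved bits as zero when reading...
fn read_reg(raw: u64) -> u64 {
    raw & DEFINED_MASK
}

// ...and write zero to them, so old software running on new hardware
// leaves EnableNewFeature disabled and keeps the old behavior.
fn write_reg(value: u64) -> u64 {
    value & DEFINED_MASK
}

fn main() {
    let stored = write_reg(0b0101);
    assert_eq!(stored & ENABLE_NEW_FEATURE, 0);
    println!("register reads back as {:#06b}", read_reg(stored));
}
```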
Good modern protocols will explicitly define extension points, so "ignoring unknown JSON keys" is in-spec rather than something implementers are merely assumed to do.
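For instance, serde's default behavior in Rust makes that extension point explicit: unknown keys are tolerated unless you opt into rejecting them. A small sketch (the message shape is invented, assuming the serde and serde_json crates):

```rust
use serde::Deserialize;

#[derive(Deserialize, Debug)]
// #[serde(deny_unknown_fields)] // uncomment to reject unknown keys instead
struct Request {
    id: u32,
    method: String,
}

fn main() {
    // "added_in_v2" is unknown to this implementation but tolerated by design.
    let msg = r#"{"id": 1, "method": "ping", "added_in_v2": true}"#;
    let req: Request = serde_json::from_str(msg).unwrap();
    println!("{req:?}");
}
```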
I disagree. I find accepting extra random bytes in places to be just as harmful. I prefer APIs that push back and tell me what I did wrong when I mess up.
Very much so. A better law would be conservative in both sending and accepting, as it turns out that if you are liberal in what you accept, senders will choose to disobey Postel's law and be liberal in what they send, too.
It's an oscillation. It goes in cycles. Things formalize upward until you've reinvented XML, SOAP and WSDLs; then a new younger generation comes in and says "all that stuff is boring and tedious, here's this generation's version of duck typing", followed by another ten years of tacking strong types onto that.
MCP seems to be a new round of the cycle beginning again.
The modern view seems to be that you should just immediately abort if the spec isn't being complied with, since it's possibly someone trying to exploit the system with malformed data.
"Warnings" are like the most difficult thing to 'send' though. If an app or service doesn't outright fail, warnings can be ignored. Even if not ignored... how do you properly inform? A compiler can spit out warnings to your terminal, sure. Test-runners can log warnings. An RPC service? There's no standard I'm aware of. And DNS! Probably even worse. "Yeah, your RRs are out of order but I sorted them for you." where would you put that?
That sounds awful and will send administrators on a wild goose chase throughout their stack to find the issue without many clues except this thing is failing at seemingly random times. (I myself would suspect something related to network connectivity, maybe requests are timing out? This idea would lead me in the completely wrong direction.)
It also does not give any way to actually see a warning message, where would we even put it? I know for a fact that if my glibc DNS resolver started spitting out errors into /var/log/god_knows_what I would take days to find it, at best the resolver could return some kind of errno with perror giving us a message like "The DNS response has not been correctly formatted", and then hope that the message is caught and forwarded through whatever is wrapping the C library, hopefully into our stderr. And there's so many ways even that could fail.
So we arrive at the logical conclusion: you send errors in Morse code, encoded as seconds/minutes of failures/successes. Any reasonable person would be able to recognize Morse when seeing the patterns on an observability graph.
Start with milliseconds, move on to seconds and so on as the unwanted behavior continues.
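For the sufficiently committed, a tongue-in-cheek sketch (all timings invented, naturally):

```rust
use std::{thread::sleep, time::Duration};

// Pretend these flip the service between erroring and healthy.
fn fail_for(secs: u64) { sleep(Duration::from_secs(secs)); }
fn succeed_for(secs: u64) { sleep(Duration::from_secs(secs)); }

fn main() {
    // SOS: dot = 1s of failures, dash = 3s, readable right off the uptime graph.
    for symbol in "... --- ...".chars() {
        match symbol {
            '.' => fail_for(1),
            '-' => fail_for(3),
            ' ' => succeed_for(3), // gap between letters
            _ => unreachable!(),
        }
        succeed_for(1); // gap between symbols
    }
}
```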
The Python community was famously divided on that matter during the Python 3 transition. Now that it is over, most people on the "accept liberally" side of the fence have jumped sides.
Is it just my connection or is the huggingface downloader completely broken? It was saturating my internet connection without making any progress whatsoever.
Ideas are cheap for a very narrow vision of "ideas". Sure, you can build your recipe site, TODO list or whatever it is cheaply and quickly without a single thought, but LLMs are still just assembling lots of open-source libraries _mostly_ written by humans into giant piles of spaghetti.
There's a hilarious thread on Twitter where someone "built a browser" using an LLM feedback loop and it just pasted together a bunch of Servo components, some random other libraries and tens of thousands of lines of spaghetti glue to make something that can render a webpage in a few seconds to a minute.
This will eventually get better once they learn how to _actually_ think and reason like us - and I don't believe by any means that they do - but I still think that's a few years out. We're still at what is clearly a strongly-directed random search stage.
The industry is going through a mass psychosis event right now thinking that things are ready for AI loops to just write everything, when the only real way for them to accomplish anything is by just burning tokens over and over until they finally stumble across something that works.
I'm not arguing that it won't ever happen. I think the true endgame of this work is that we'll have personal agents that just do stuff for us, and the vast majority of the value of the entire software industry will collapse as we all return to writing code as a fun little hobby, like those folks who spend hours making bespoke furniture. I, for one, look forward to this.
The "built a browser" example you gave reminded me how I've "built a browser" as a kid in the 90s using Visual Basic (or something similar) - I've simply dragged the browser view widget, added an input and some buttons that called functions from the widget and there you go, another browser ready :-)
I agree with your vision of the endgame. We wouldn't even need a screen; we will communicate with our agents verbally or with gestures, through some device that has a long battery life and is always on.
I just hope that we retain some version of autonomy and privacy, because no one wants the tech giants listening in on every single word you utter just because your agent heard it. No one wants it, but only some, not many, actually care.
Dilbert was pretty influential for me in the 90s and early 2000s. I enjoyed those comics a bunch while I was a kid. He seemed to struggle a bit with his fame, and apparently his divorce caused him a pretty big psychic trauma, which was unfortunate.
His later personality was... not my style... and I dumped all of his books into little free libraries a few years back. The only thing I really found interesting from his later work was his focus on systems rather than process.
Can't deny the early influence, though. The pointy-haired boss will live on forever.
Huh, literally setting this up today but using registry.terraform.io/baladithyab/truenas. It works for a few things, but I wasn't able to get user management going.
The other providers are built on the OpenAPI APIs which appear to be brittle and incomplete. I guess this is the websocket version in command-line form?
I'd definitely be interested in switching over if you can flesh this out!
Yep I spent a lot of time trawling through the midclt and webui source code to understand how to work with it. Are you using this in a professional context or homelab only?
None of the existing Terraform providers for TrueNAS can create apps on the latest version, which uses Docker instead of Kubernetes.
Please tell me what you need by creating issues on the repo.
If whoever wrote this wants to add an authentic (and somewhat period correct) terminal front-end, I wrote a VT420 hardware emulator that works in the browser and we can wire them together!
More like the VT05. The VT52 came a few years later. But yeah, the VT420 is way later.
Fun fact: The VT52 didn't have a loudspeaker for the bell sound. Instead, it had an electromechanical relay which was set up to self-oscillate.
"Typing a character produced a noise by activating a relay. The relay was also used as a buzzer to sound the bell character, producing a sound that "has been compared to the sound of a '52 Chevy stripping its gears."
I used Miri for some key deno libraries and spent a fair bit of time cleaning up the violations. Many of them were real unsoundness bugs due to reference aliasing.
Unsafe code absolutely needs Miri if the code paths are testable. If not all code is Miri-compatible, it's worth restructuring so you can Miri-test as much as possible.
Note that Miri, Valgrind and the LLVM sanitizers all complement each other, and it's really worth adding all of them to a project if you can.
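As a hypothetical example (not from the deno codebase) of the reference-aliasing class of bug: this compiles cleanly, but Miri rejects it under its Stacked Borrows aliasing checks.

```rust
fn main() {
    let mut value = 42u32;
    let ptr: *mut u32 = &mut value;

    unsafe {
        let a = &mut *ptr;
        let b = &mut *ptr; // creating `b` invalidates `a`
        *b += 1;
        *a += 1; // UB: `cargo +nightly miri run` flags this use of `a`
    }
    println!("{value}");
}
```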
I did a huge chunk of work to split deno_core from deno a few years back, and TBH I don't blame you for moving to raw rusty_v8. There was a _lot_ of legacy code in deno_core that was challenging to remove, because touching much of it would constantly break random downstream tests in deno.
We maintained it until we introduced bindings — at that point, we wanted more fine-grained control over the runtime internals, so we moved to raw rusty_v8 to iterate faster. We'll probably circle back and add the missing pieces to the deno runtime at some point.
I've been trying to do something similar to set up Windows VMs with developer tools. This would be awesome if there were a way to inject a `ps1` script so we could go through the awkwardness of installing choco and various dev tools.
For anyone interested, the magic incantation in the autounattend.xml is:
Redirecting to COM1 is a fun hack I discovered that allows you to remotely monitor these from build scripts.
Even better would be figuring out how to slipstream the choco packages into the ISO - it's not super reliable to install these packages in my recent experience.