smittywerben's comments | Hacker News

HN always sides with web operators. There's probably a VC joke in there.


I don't believe in the "never roll your own encryption" advice; it's literally giving up. Does it make economic sense, or is it just for a hobby? That's more debatable. It's also a foil of 'don't use regex to parse HTML' or whatever, where the thread gets closed for comments.

The filesystem is so deeply connected to the OS that I bet there's a lot of horror around swapping those interfaces. Then again, I've never heard anything bad about DragonFly BSD's HAMMER. But it's basically assumed you're using DragonFly BSD.

Would I keep a company's database on a new filesystem? No, nobody would know how to recover it from failed disk hardware.

This isn't really my area, but a Rust OS using a ZFS-like filesystem seems like a lot of classic Linux maintainer triggers. What a funny little project this is. It's the first I've heard of Redox.

Edit: reminds me of "The Tar Pit" chapter from The Mythical Man-Month

> The fiercer the struggle, the more entangling the tar, and no beast is so strong or so skillful but that he ultimately sinks.


The "never create your own encryption" advice is specifically because crypto is full of subtle ways to get it wrong, which you will NOT catch on your own. It's a special case of "never use encryption that hasn't been poked at for years by hundreds of crypto specialists" — because any encryption you create yourself would fail that test.

Filesystems, as complex as they are, aren't full of traps like encryption is. Still plenty of subtle traps, don't get me wrong: you have to be prepared for all kinds of edge cases like the power failing at exactly the wrong moment, hardware going flaky and yet you have to somehow retrieve the data since it's probably the only copy of someone's TPS report, that sort of thing. But at least you don't have millions of highly-motivated people deliberately trying to break your filesystem, the way you would if you rolled your own encryption.
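
To make "subtle ways to get it wrong" concrete, here's a minimal sketch of one classic pitfall (hypothetical Python, made-up key): a naive HMAC tag check that short-circuits on the first mismatched byte, next to the constant-time comparison the standard library already ships.

    import hashlib
    import hmac

    SECRET_KEY = b"made-up-example-key"  # placeholder only, not a real key

    def naive_verify(message: bytes, tag: bytes) -> bool:
        expected = hmac.new(SECRET_KEY, message, hashlib.sha256).digest()
        # == stops at the first mismatched byte, so response time can hint
        # at how much of a forged tag was correct
        return tag == expected

    def safer_verify(message: bytes, tag: bytes) -> bool:
        expected = hmac.new(SECRET_KEY, message, hashlib.sha256).digest()
        # compare_digest takes roughly the same time no matter where the mismatch is
        return hmac.compare_digest(tag, expected)

Both versions "work" on the happy path; only one survives people deliberately probing it.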


That matches what I've heard, so I think you stated the trope perfectly. Your response is a good point about the actual difficulty. Perhaps I'm confused about what 'rolling your own encryption' means at an abstraction level. I just think it's weird that it comes up in an OS thread. Anyone who is serious about encryption is serious about the encryption hardware. At a higher level, WolfSSL limits the ciphers to a small, modern suite, which reduces the attack surface. Replacing OpenSSL is a fool's errand, I think; it's clearly the canonical implementation, and it makes a perfect security scapegoat. However, this is still about the x86 OS topic. Perhaps it's some TPM politics, similar to the decade-old stigma surrounding ZFS. Maybe I'm just questioning the limits of the x86 platform on any new operating system. Anyway, thanks for the response.


> I just think it's weird that it comes up in an OS thread

The only connection is that writing custom encryption is a thing that smart people like to try their hand at, but its success is defined by the long tail of failure cases, not by the cleverness of the happy path. I agree 100% with what rmunn said.

As I said, I'm not a filesystem person, but my sense is that filesystem difficulty is also dominated by the long tail of failure cases, and for similar reasons. Failure in encryption means you lose control of your data; failure in filesystems means you lose your data (or maybe you lose liveness/performance). [0]

But really I just meant it in the sense that it's a journey people often go down while underestimating just how long it takes. So it's a sort of trap from the project-management perspective.

> I'm confused about what 'rolling your own encryption' means at an abstraction level

It cuts through many abstractions. You should definitely not define your own crypto primitives. You also shouldn't define your own login flow. You shouldn't design a custom JWT system, etc. You probably shouldn't write your own crypto library unless there isn't one in your language, in which case you should probably be wrapping a trusted library in C or C++, etc. The higher you go in abstraction, the more it's okay to design an alternative. But any abstraction can introduce a weakness, so the risk is always there.
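
Concretely, "wrapping a trusted library" at the high end of that ladder might look like this sketch (Python, assuming the third-party cryptography package is installed) instead of assembling AES and HMAC by hand:

    from cryptography.fernet import Fernet  # vetted recipe: AES-CBC + HMAC, versioned tokens

    key = Fernet.generate_key()        # the library handles key generation
    box = Fernet(key)

    token = box.encrypt(b"TPS report draft")  # authenticated encryption in one call
    plaintext = box.decrypt(token)            # raises InvalidToken if the token was tampered with
    assert plaintext == b"TPS report draft"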

[0] Ordinarily you still have backups, which makes filesystem failures potentially less final than encryption failures. But what if the filesystem holding your backup root keys fails? Then the encryption wasn't a failure, but you've potentially crypto-shredded your entire infrastructure.


> "The higher you go in abstraction, the more it's okay to design an alternative."

What choice of paint you throw into the tar pit makes zero difference.


When this happened to me it was because, I can only guess, the Gemini servers were overloaded. Symptoms: Gemini model, opaque API wrapper error, truncated responses. To be fair, the Anthropic servers are overloaded a lot too, but they return a clear error. I gave Gemini a few days on the bench and it fixed itself without any client-side changes. YMMV.


Ah, the ol' Dropbox risk-management tactic where they show you a random selection of your photos when you open the page. Or any page on the site. Suggested: "Remembering Summer Vacation 2020". By the way, do you want to compress your whole photo library down to Instagram quality, while it offers to consume more of the photos on your computer, disillusioned by the last few pennies of value that already fell? What's that? Your iCloud or Android device is out of space because of the two ProRes videos your iPhone took after the commercial convinced the Apple user to press the proprietary Apple video-encoding button to maximize their Instagram engagement. The Samsung folds itself into a roly-poly bug shell. Eventually, all of your photos will be sent to Instagram, the final destination. Once there, after compressing your photos without asking, they will insist on your choosing ZSTD as the coffin.

So, on the consent-quality-usefulness triangle (WIP), Google is clearly eliminating quality and consent to provide you with a useful interface to the Google consentless compression box. Just what everyone wanted. The future is now.

Notification: You have 2 new views (details button: 2 ad-consenting views, 0 other views) on the photo you took of the compression artifact in a video that you suspect Google might have accidentally compressed without your consent, mistaking itself for Instagram. Unfortunately, your comparison photo gets equally confused and is compressed until it's just as bad as the first one. Now the photos look identical, and you look like a conspiracy theorist tweeting about "video encoding" from your Sesame Street Elmo phone, just like everyone else, with no issue at all. "We're in the Ouroboros. Maybe Paramount isn't the issue. Maybe it's Paramount Plus." The Samsung roly-poly bug interrupts and insists this issue will have to wait because it's 2pm on a Friday. Now your Elmo phone is the only device still working in the office, as you try to convince your wife why you have to stay late: "Because you're different from the rest of the people posting compression-artifact-laden photos."


I avoided KDE after first experiencing several bad dates with GNOME. Skipped straight to Xfce or a tiling WM. Years later, I decided to try KDE again because someone made an Arch Linux joke about it. I don't remember the joke, but it screamed "I use Arch btw". That's when I realized KDE and I had something going on.

In fact, my GNOME-fearing worldview was reinforced just last month when I built a Samba/S3/SFTP Windows NTFS-LFS FUSE netshare VPN on my Proxmox server to solve this multiple-desktop-environment issue for the last and final time. Compatibility with everything? No issue.

I achieved a monumental 2 kb/s transfer speed, slower than the modem speeds I experienced in my childhood on dial-up. My 2 kb/s supercomputer environment was remarkably consistent across all protocols. Thanks to the GNOME community, I was glad to hear that the speeds I was getting were apparently a major improvement over the last release.

Surprisingly, nobody has provided me with any file access architecture memes from the thriving Arch Linux PDP-11 community. Needless to say, having the choice of a desktop environment is great. And KDE is just happy I showed up with a cool ride.

edit: less neg


There's an Olympics for persons with disabilities. TikTok is in one of those, but with other similarly sized companies. News at 11: 1080p today looks worse than it did 5 years ago, ending a two-decade streak of innovation and improvement to the world's telecom system.


Google claimed Protobuffers are the solution, but Google's planetary engineers clearly have ZERO respect for the mixed-endian remote systems keeping the galactic federation afloat with their cheap CORBA knockoff. It's like, sure, which Plan 9 mainframe do you want to connect to, as if we all live on planet Google. Like, hello???


This might sound crazy, but what if we're hitting some sort of hardware limitation, like too many people sharing a single phone line? Slashdot's approach is innovative, but it's only about the best-case solution when we're all still sharing the same phone line. It's hard to explain what I'm trying to say.

Like when your mom picks up the phone and it kicks you off the dial-up internet. Except these days, it's like 4 pancakes of getting kicked off since Cloudflare entered the scene, 5 pancakes if you're in the EU, and sure, let's throw in Anubis the catgirl just to be extra safe with the computers.


Dare me to say "costless leaky abstraction." Then I'll point to the thread next door using Chrome profilers to diagnose Chrome internals using Scratch. Then I'll finish by saying that at least Unreal has that authentic '90s feel to it.


NEW QUEST: "These New Gaming Requirements Are Unreal"

OBJECTIVE: Any project that demands HDRP and Nanometric Mesh

BONUS: Find the happy path

