
This turned out to be entirely the right approach, though, and it was probably pretty obvious even at the time. Sound cards with built-in mixers have all but died out; everything they did has been eaten by software.

Even at the time, few games used an API where they managed multiple channels directly; software mixing was commonplace from the 90s on. Any game that wanted to play battle sounds was not relying on the mere 6-8 channels that cards of that era could handle.

Our modern PipeWire-based workflow is remarkably simple and remarkably effective, and in large part it's an evolution of PA.


I find it indicative of the quality of these complaints that sound cards with mixers were brought up at all. As if that's a good reason to hate PA.


No, I hate PA because it didn't work properly, right up to the end. (PipeWire was better on day 1 than PA ever was.) I just think that "you absolutely need PA to have multiple apps playing sound" was always nonsense, and the same sort of nonsense that was used to push systemd.


Yeah, that was total nonsense. Good cards existed. And if you didn't have a good card, ALSA had a soft mixer. FreeBSD added a soft mixer to OSS, too, so you didn't even need ALSA. Worst case, you could run the Enlightenment sound daemon without Enlightenment; it was compact and just worked (as long as you had a simple sound setup).


I'm always happy to discuss sound cards with mixers, though! As a supporter of the Bloop Museum[1], I think the question of what might have been, had we kept building dedicated hardware for playing dozens or hundreds of sound files at a time, is an interesting one. There's a lot of experimentation in the audio space that has kind of died out, because audio is so cheap - while over in graphics, we're still seeing interesting advancements and dead ends.

[1] https://oldbytes.space/@bloopmuseum


SEEKING WORK | Seattle | Remote OK

I am a Site Reliability Engineer (SRE), Google-style, with experience at both large and small organizations. I can help you build a Platform Engineering practice from the very beginning. I'm looking to help small dev teams increase their velocity by implementing DevOps best practices: CI/CD, Kubernetes deployments, and effective monitoring frameworks.

My resume: https://resume.gauntletwizard.net/ThomasHahnResume.pdf

My LinkedIn: https://www.linkedin.com/in/thomas-hahn-3344ba3/

My Github: https://github.com/GauntletWizard


Do also check out "The Day the Earth Blew Up", a love letter to the classic cartoons that simultaneously feels like a feature film and a cartoon short. Not the best thing I've ever seen, but I'd give it a solid 8 out of 10.

I'm so glad Coyote is finally getting a release. It was at the top of my watchlist all through 2022-2023, the test screenings went over fantastically, and I've heard a lot of people involved in the movie say that it turned out great.

The decision from WB to shelve it was just cruel; it was 100% paid for, and the work to complete it carried on and was finished even after it was shelved. It's a deep flaw in our tax system that it could ever be better to trash completed work than to try to find a buyer.


When I had the brief displeasure of working on HDFS at Facebook, we took a series of customer meetings to figure out how to get our oldest customers to upgrade their clusters. I was in a meeting with the photos team about what their requirements were and what was blocking them from upgrading, and they were very frank - they asked if the upgrade preserved the internal struct types associated with blocks on the disk servers. They didn't actually use HDFS as a file system; they allocated 1 GB files with zero replication, used deep reflection to find the extents that comprised them on the diskful storage servers, and then built their own archival backup file system on top of that. I was horrified. Some of the older hands on the team were less surprised, having had some inkling of what was going on, even though they clearly didn't understand the details. Others considered it tantamount to sacrilege.

I think about this a lot. What they had built was probably actually the best distributed file system within Facebook. It was structured similarly to Unraid, and had good availability, durability, and space-saving properties, but the approach to engineering was just so wrongheaded, in my opinion, that I couldn't stomach it. Talking about it with other Java programmers within Facebook, nobody seemed to mind. Final was just a hint, after all.


That reminds me of a quote from some Perl documentation[1]:

> Perl does not enforce private and public parts of its modules as you may have been used to in other languages like C++, Ada, or Modula-17. Perl doesn't have an infatuation with enforced privacy. It would prefer that you stayed out of its living room because you weren't invited, not because it has a shotgun.

It's not exactly the same situation, but the point is that, at the end of the day, you need to be able to rely on the people involved being willing to act reasonably. If you can't, then you're going to have problems.

---

[1] https://perldoc.perl.org/perlmodlib


This approach is surprisingly (or unsurprisingly) one of the most robust.


The atomic weight of uranium is 238; the atomic weight of oxygen is 16, so triuranium octoxide (U3O8) is 3 × 238 = 714 parts uranium to 8 × 16 = 128 parts oxygen - roughly 84% uranium by weight. Even if you're only counting the uranium in the triuranium octoxide, that's still 60+% of the total mass coming from uranium atoms. I'd take that purity any day.


purity invites gradation; 'pure' does not.


Whenever I get mad at GitHub Actions, I refer to it by its true name: Visual SourceSafe Actions. Because that's what it is, and it shows. If you check out the Actions runner's source code[1], you'll find the VSS prefix all over, showing its lineage.

[1] https://github.com/actions/runner/blob/6654f6b3ded8463331fb0...


I know they fixed VSS ages ago, but for many years it was buggy af and would catastrophically lose data on automerges that it confidently made and got wrong.

I had a coworker who called it Visual Sorta-Safe which is just about the best parody name I've ever heard in my entire career.


Oh, C#. Nice!


You may want to import The Jargon File wholesale. It's badly out of date at this point, but surprisingly relevant and of more than moderate value as a historical reference.


This is a pretty nice guide, though it misses some steps I'd consider important. If you're making a CA for internal use today, I would highly encourage you to use Name Constraints. Name Constraints allow you to specify that your CA will only ever sign domains you pre-commit to. This means you can add your internal CA to the system trust stores on all of your corporate systems and not worry about it being abused to MITM your employees' connections to the wider internet. (If that is a feature you'd like to have, I would be happy to expound further on why that's a bad idea.)
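For a concrete picture, here's a minimal sketch of minting such a root with Go's crypto/x509 (the zone corp.example.com and the subject are placeholders; a real setup would keep the key offline and handle errors less casually):

  package main
  import (
    "crypto/ecdsa"
    "crypto/elliptic"
    "crypto/rand"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "math/big"
    "os"
    "time"
  )
  func main() {
    // Key for the root CA itself.
    key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    if err != nil {
      panic(err)
    }
    tmpl := &x509.Certificate{
      SerialNumber:          big.NewInt(1),
      Subject:               pkix.Name{CommonName: "Example Internal Root CA"},
      NotBefore:             time.Now(),
      NotAfter:              time.Now().AddDate(10, 0, 0),
      IsCA:                  true,
      BasicConstraintsValid: true,
      KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageCRLSign,
      // The name constraint: this CA (and anything signed below it) may
      // only vouch for corp.example.com and its subdomains.
      PermittedDNSDomainsCritical: true,
      PermittedDNSDomains:         []string{"corp.example.com"},
    }
    der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    if err != nil {
      panic(err)
    }
    pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
  }

A client that enforces the (critical) constraint will refuse anything this root signs for a name outside corp.example.com, so even a compromised internal CA can't be used to impersonate the rest of the internet to those clients.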

I'm giving a workshop about this in a few weeks at BSides Seattle[1] - pick up a YubiKey and come play with PKI with me.

[1] https://www.bsidesseattle.com/2025-schedule.html


> why that's a bad idea

Given that traffic inspection for user and service proxies relies on MITM for many forms of IPS/IDS beyond basic SNI signature detection, I'd love to hear more!

I'm not necessarily suggesting it should be mandatory - I remember the pain of introducing Zscaler about a decade ago and the sheer number of Windows apps that simply broke, leaving a trail of complex PAC files - but not enough to warn me off the solution.

I would assume the halfway house would be to leave Name Constraints off your offline root CA, maintain (at least) one intermediate with constraints turned on for regular certificate lifecycle management of internal certs, and keep a dedicated intermediate that is only used to generate the MITM certs?


ZScaler is an absolute horror for a software developer also in charge of ops.


I found things got a lot better once the proper APIs and Terraform ZPA/ZIA coverage arrived.

Still many footguns, but I have much the same feelings about most of the tooling in the proxy/VPN space.


Only if the client actually supports the optional name constraints extension. Is support acceptably widespread nowadays?


Yes, Chrome introduced support in mid 2023, and it's now well rolled out. Firefox has had support for longer.

https://issues.chromium.org/issues/40685439


Author here. I agree this is an important feature for a CA. I'll try to add it.


Just added it.


SAML is insecure by design. Others have said it better before me, such as https://joonas.fi/2021/08/saml-is-insecure-by-design/, but the quote I got from an old thread here was "Sign bytes, not meanings".

Parser differentials are expected and even necessary; what you intend to get from a signed response is very meaningful. A dilemma in modern TLS is that sometimes you want to trust one internal CA; that's the easy path. Sometimes you want to accept certificates from partners' CAs, and you've got multiple partners - then you can no longer examine just the end certificate; the root of that chain is equally important to your decision.
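As a sketch of what that means mechanically (Go's crypto/x509 here; partnerFor and the partner map are made up for illustration), you don't just validate the leaf - you check which partner's root pool the chain actually terminates in, and hang your authorization decision off that:

  package partnertrust
  import (
    "crypto/x509"
    "errors"
  )
  // partnerFor maps a presented leaf certificate to a partner by checking
  // which partner's trusted roots the chain actually terminates in.
  func partnerFor(leaf *x509.Certificate, intermediates *x509.CertPool,
    partnerRoots map[string]*x509.CertPool) (string, error) {
    for partner, roots := range partnerRoots {
      _, err := leaf.Verify(x509.VerifyOptions{
        Roots:         roots,
        Intermediates: intermediates,
      })
      if err == nil {
        return partner, nil
      }
    }
    return "", errors.New("certificate does not chain to any trusted partner root")
  }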

This is also why I recommend against AWS's Sig algorithms whenever possible; V4 is theoretically secure, but they screwed it up twice - SigV1 and SigV3 were insecure by design, and yet somehow made it past design review and out into the public.


These are dang short compared to Java's FooBarBazBeanFactoryFuncClassImpl. The point you may be responding to is that "Short variable names" are themselves contextual. If you're doing a simple loop:

  for i := range personlist {
    person := personlist[i]
    ...
  }
Is more readable than

  for personNum := range personlist {
    person := personlist[personNum]
    ...
  }
because it makes clear that the i is basically irrelevant. The latter isn't bad style if you're three loops deep, though, because with i, j, and k it's a bit harder to keep track of which is which.
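For instance (made-up data, just a sketch of the nested case):

  teams := [][][]string{{{"fido"}, {"rex", "spot"}}, {{"mittens"}}}
  for teamNum := range teams {
    for personNum := range teams[teamNum] {
      for petNum := range teams[teamNum][personNum] {
        // With i, j, and k it's easy to index the wrong slice here;
        // the longer names keep each level straight.
        fmt.Println(teams[teamNum][personNum][petNum])
      }
    }
  }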

