Hacker News | maep's comments

Neat, but it has problems following in-universe rules, e.g. in fantasy novels.


It's cheap, beginner-friendly, and very easy to learn. There is an overwhelming number of ESP32 boards, variants, and software options, which is daunting when you're just getting into embedded development.


That must be the biggest scandal since Watergate-gate.


I used this back in the Vista times and it worked very well. Very similar to how people use WSL these days.


WSL1 is actually quite similar afaik. Unfortunately development stalled there too, in favour of WSL2. I remember Co-Linux being a thing around 2005, but it never stuck with me as I was mostly happy with Cygwin, until all the libuv and Go (now Rust) stuff popped up.


I don't think WSL1 development stalled so much as Microsoft determined it wasn't a viable path forward. I/O was piss slow, and chasing syscall <-> NT API mappings probably wasn't very fun.

Microsoft already knew Hyper-V quite well so a VM made sense, they just had to put some automagic management around it.


If anything, WSL1 showed me that MS still has some programming chops lurking around (I mean, obviously, I know they have some truly amazing developers and their research talent is top notch), but that project was technically pretty cool with the pico-processes and syscall translation layer.

But the one thing they would never be able to overcome was CUDA and kernel modules. If you show people "Linux" and they can't build their software, then it might as well be hot garbage.

I use WSL2 daily and far more often than my actual Linux VMs. It's not the fastest, but it did solve a huge chunk of the problems with WSL1. No, it's not native, but I already have 3 monitors, a huge tower, and a Mac Mini M1 on my desk. I didn't need a native box at my fingertips (those go on a rack in the basement, lol).


Isn't WSL2 still slower than WSL1 for accessing Windows drives and networks? I use WSL1 because I don't have Hyper-V available on my work PC, but it's also convenient because I perform most of my work on Windows.


WSL2 uses a lower-level, simpler API than Hyper-V, which is a full-fledged hypervisor. That's why it is also available on Windows Home. If you can enable WSL1, you should also be able to use WSL2, unless HW virtualization is completely blocked off. You need to run a command to change the default though.
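If I remember right, the commands are (run from PowerShell):

  wsl --set-default-version 2    # make WSL2 the default for newly installed distros
  wsl --set-version <distro> 2   # convert an existing distro to WSL2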

WSL2 cross-OS I/O performance is lower than WSL1's, especially with the random access patterns and constant stat/access calls made by Linux-targeting programs. However, cross-OS access should be the exception. Working on WSL2's native ext4 FS is almost as fast as running native Linux, so you should really copy files in and work on them inside WSL.
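To illustrate (paths are just made up), the difference is between working through the /mnt/c Windows mount and working in the Linux filesystem:

  # slow: cross-OS access through the /mnt/c Windows mount
  cd /mnt/c/Users/me/src/project && make

  # fast: copy into WSL's ext4 filesystem and build there
  cp -r /mnt/c/Users/me/src/project ~/project
  cd ~/project && make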


> Mutable global state is evil. Friends don’t let friends use mutable global state.

Throw away your CPU and RAM then.


There are certainly levels of the abstraction pyramid where mutable global state is unavoidable; however, it shouldn't be too difficult to reach a point where we have enough abstraction that we don't need to worry about mutable global state in our own code.

And even if those abstractions can't be 100% effective, we'd go a long way toward the same benefits just by developing the mindset of avoiding mutable global state wherever possible, except for the rare cases where it's needed as a last resort.


Your CPU has an MMU in order to (among other things) let the OS prevent mutable global state.


I can not possibly roll my eyes hard enough.

Go ahead and write lots of mutable global statics. But when your program crashes randomly, you need my help debugging it, and the culprit is, once again, a mutable global, then you have to do the walk of shame.


And disks. And the cloud. Or basically, you know, computers.


Don't threaten me with a good time.


Ah yes, the cloud where we all happily share compute resources without any restrictions to avoid stomping on each others toes.


The universe, you mean.


I spent half my professional career doing listening tests (MUSHRA and P.800), specifically on test items like Tom's Diner. 128 kbps MP3 is fairly easy to pick out, especially if you can compare it to the original. Double the bitrate and it's a real challenge.

Modern codecs like Opus are much more efficient. At high bitrates they are fully transparent, and anybody who claims to be able to hear a difference is full of shit. Put them in a controlled setting and they fail every time.


LAME @ 192 kb/s VBR was transparent 20 years ago. That said, FLAC is still a good choice because storage is cheap now and you don't want to end up in a copy-of-a-copy situation.

Some young folks think that 24-bit/192 kHz is the one true form and would consider a 16/44 FLAC a lossy encode, and then there are the vinyl folks. (I like vinyl, but not for the fidelity.)

Required reading: https://xiph.org/video/vid2.shtml


I've found 128 kbps Opus to be the best quality for streaming my music when I'm not home. It is very fast to encode on the fly, and outside the house I mostly listen with either Bluetooth headphones or sometimes in a car, so playing something like FLAC would be a waste of bandwidth.

Maybe I'm old, but I do not hear a difference between 128 kbps Opus and FLAC. I mainly use FLAC because it is an excellent archival format and you can transcode it to other formats easily.
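For what it's worth, the on-the-fly transcode is a one-liner with opus-tools or ffmpeg (file names are just placeholders):

  opusenc --bitrate 128 song.flac song.opus
  # or
  ffmpeg -i song.flac -c:a libopus -b:a 128k song.opus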


Yea, same with me. Unless you have a perfect setup this is good enough. Though if you want to do a little experiment, try Fatboy Slim's Kalifornia: the beginning is notorious for destroying transform-based codecs.


I think these kinds of blind listening tests are fundamentally flawed. For example, in the graphics realm (games, video encoding, colour science, etc.) all it takes is a momentary black screen between two comparison images to make it vastly more difficult to detect differences. Likewise, side by side is more difficult than swapping between two images instantly. Audio makes it impossible to do an instant swap; at best you're getting the equivalent of a side-by-side comparison.


If anything those tests make it easier to find subtle differences, which is good if transparency is the goal. I don't think that makes them fundamentally flawed. They are used throughout the industry, making results comparable.

Of course there are other ITU tests that work without hidden references, looping, or even A/B comparison. They require a much bigger listener pool, are more expensive, and take longer, and are thus used less often during development.


Maybe not fundamentally flawed, but audio ABX testing leans more on short-term memory and opinion (especially in unskilled subjects) than I would like. I don't think there is any right answer to audio blind tests.

I'll trust actual validated limits of human perception such as 16/48 audio, 1~3 ΔE colour difference, etc. Techniques used in video encoding like PSNR, SSIM, etc. are also pretty well grounded in science. Also SINAD.

But anything involving a human blindly comparing audio is getting into audiophile pseudoscience territory, no matter how large the cohort or how it is executed.


I can assure you that audio codec testing is a thorough science. Tools such as PSNR, PEAQ, or POLQA all have limitations and cannot fully replace a human listener. Those familiar with the topic are often vocal critics of audiophile bullshit.

No, this is nowhere near pseudoscience, psychoacoustics is an established field of science.


Audio does not make it impossible to do an instant swap. Any good ABX tool lets you switch between test/reference samples with zero delay. Hear for yourself:

https://abx.digitalfeed.net/list.html

(you can press A, B, or X on the keyboard for instant switching)


Yeah, that's what I meant when I said side-by-side. You can't use the same pattern matching your eyes/brain do when an image instantly swaps, because audio has to keep moving in time to be heard at all. Instantly swapping between two audio streams is no better than looking at two images side by side.


I laughed when Reddit suddenly spent months claiming Spotify is garbage, despite it using 320 kbps Vorbis, and that the other three streaming platforms would dethrone it. I doubt any of them could tell AAC or Vorbis at 160~192 kbps from FLAC; hell, I doubt they could even tell 192 kbit/s VBR LAME from FLAC, let alone the modern lossy codecs. lol


Shoutout to h0ffman, in my opinion the best contemporary junglist on oldschool hardware. His 8-Bit Jungle music disk: https://www.youtube.com/watch?v=--J66FY7qro


Perhaps it's Gwynne Shotwell's doing. She seems to be one of the few people on this planet who can say "no" to Musk and not get bullied.


She is responsible for the success of a company that's becoming increasingly strategic to US interests. Nobody will mess with her, not even Elon.


"Inferior" is relative. The main focus of LC3 was, as the name suggests, complexity.

This is hearsay: Bluetooth SIG considered Opus but rejected it because it was computationally too expensive. This came out of the hearing aid group, where battery life and complexity are major constraints.

So when you compare codecs in this space, the metric you want to look at is quality vs. CPU cycles. In that regard LC3 outperforms many contemporary codecs.

Regarding sound quality, it's simply a matter of setting the appropriate bitrate. So if Opus is transparent at 150 kbps and LC3 at 250 kbps, that's totally acceptable if it gives you more battery life.


Regarding complexity, do you have any hard numbers? Can't find anything more than handwaving


I remember seeing published numbers based on instrumented code, but could not find them.

I did a quick test with the Google implementation (https://github.com/google/liblc3), which is about 2x faster than Opus. To be honest, I expected a bigger difference, though it was just a superficial test.
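To reproduce that kind of rough comparison, one could simply time both encoders on the same PCM input, something like this (the LC3 command name is a placeholder for whatever encoder binary you build from liblc3; opusenc is the opus-tools reference encoder):

  time lc3_encoder input.wav output.lc3             # placeholder liblc3 encoder invocation
  time opusenc --bitrate 96 input.wav output.opus   # opus-tools reference encoder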

A few other things that might also be relevant to why they picked one over the other:

  - suitability for DSPs
  - vendor buy-in
  - robustness
  - protocol/framing constraints
  - control


Thanks for checking it, appreciated

- Well, 2x is nothing to write home about.

- DSP compatibility was probably considered but never surfaced as a reason, so it's hard to guess the investigation's results. Plus pricing and availability of said DSP modules.

- Robustness: well, that's one of the primary features of Opus, battle-tested by WebRTC, WhatsApp, etc. (including packet loss concealment (PLC) and low-bit-rate redundancy (LBRR) frames)

- Algorithmic delay for Opus is low, much lower than older BT codecs, so that definitely wasn't a deal-breaker

- The ability to make money out of a standard is definitely an important thing to have


If used in a small device like a hearing aid, a 2x factor can have a significant impact on battery life.

VoIP in general experiences full packet loss, meaning if a single bit flips the entire packet is dropped. For radio links like Bluetooth it's possible to deal with some bit flips without throwing the entire packet away.

Until 1.5, Opus's PLC was in my opinion its biggest weakness, compared to other speech codecs like G.711 or G.722. A high compression ratio makes bit flips much more destructive.

As for making money, Bluetooth codecs have no license fees.


> For radio links like Bluetooth it's possible to deal with some bit flips without throwing the entire packet away.

Opus was intentionally designed so that the most important bits are in the front of the packet, which can be better protected by your modulation scheme (or simple FEC on the first few bits). See slide 46 of https://people.xiph.org/~tterribe/pubs/lca2009/celt.pdf#page... for some early results on the position-dependence of quality loss due to bit errors.

It is obviously never going to be as robust as G.711, but it is not hopeless, either.


You can check out Google's version which I assume is bundled in Android: https://github.com/google/liblc3


I'm glad this was rejected. The author considers violations of PEP8 to be broken code that doesn't generate exceptions.

Lots of people treat PEP8 as the word of god, but apparently have not bothered to read it. "Many projects have their own coding style guidelines. In the event of any conflicts, such project-specific guides take precedence for that project."


> I'm glad this was rejected.

It’s an April Fools joke from forever ago…

