
I do in fact plow unthinkingly into concurrency quite frequently, and it works out well. I reach for techniques like "just throw it on a dedicated thread", "put each stage in its own thread and move objects between stages via channels", and "pin one thread to each of N cores and distribute incoming events across them" whenever they seem like they might help, and they keep working out pretty much every time.
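To make that concrete, here's a minimal sketch of the "each stage in a thread, connected by channels" shape, using only std::thread and std::sync::mpsc (the squaring stage and the numbers are made-up stand-ins for real work):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // Stage 1 feeds Stage 2 through a channel; each stage owns its thread.
    let (tx, rx) = mpsc::channel::<u64>();

    let producer = thread::spawn(move || {
        for n in 0..1_000u64 {
            tx.send(n).expect("receiver hung up");
        }
        // tx drops here, closing the channel and ending the consumer's loop.
    });

    let consumer = thread::spawn(move || {
        // Iterating the receiver yields values until every sender is gone.
        rx.iter().map(|n| n * n).sum::<u64>()
    });

    producer.join().unwrap();
    let total = consumer.join().unwrap();
    println!("sum of squares: {total}");
}
```

Ownership moves with each value through the channel, so neither stage can touch data the other is still using.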

If you're writing C, you're going to have a bad time, but we've built some really great tools on modern type systems that make it far easier to treat concurrency as something you can rely on using safely.

When you're using a language with misuse-resistant core primitives, like structured concurrency, or like Rust's ownership model, Mutex, and the Send/Sync traits, it really is a meaningfully different programming experience. You make small uses of concurrency all the time, because you know by default, without investing any time in checking, that you haven't made some dumb mistake.
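As one small illustration of what "misuse-resistant" means here: Rust's Mutex owns the data it guards, so the only way to reach that data is through lock(). "Forgot to take the lock" isn't a bug you can write, it's a compile error. (The Vec of results below is just a made-up stand-in.)

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // The Mutex owns the Vec; the only path to it is through lock().
    let results = Arc::new(Mutex::new(Vec::new()));

    let handles: Vec<_> = (0..4)
        .map(|i| {
            let results = Arc::clone(&results);
            thread::spawn(move || {
                results.lock().unwrap().push(i * 10);
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
    println!("{:?}", results.lock().unwrap());
}
```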

When you use concurrency all the time, and the compiler gives you instant feedback describing the precise data dependency that would make an idea a dumb choice, you get a tight loop for learning how to use concurrency correctly, and which designs it's a good fit for.
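A concrete (made-up) instance of that feedback: hand a non-thread-safe Rc to a new thread and rustc refuses, naming the exact type that can't cross the boundary and why; swap in Arc and it compiles:

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    // This version is rejected at compile time. Rc's refcount update is
    // not atomic, so Rc<Vec<i32>> is not Send, and rustc points at the
    // precise capture:
    //
    //     let shared = std::rc::Rc::new(vec![1, 2, 3]);
    //     thread::spawn(move || println!("{:?}", shared));
    //     // error[E0277]: `Rc<Vec<i32>>` cannot be sent between threads safely
    //
    // Arc's atomic refcount makes it Send, so this compiles:
    let shared = Arc::new(vec![1, 2, 3]);
    thread::spawn(move || println!("{:?}", shared))
        .join()
        .unwrap();
}
```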

I agree that concurrency isn't a replacement for looking for algorithmic improvements, profiling and tuning your memory access patterns, using probabilistic filters, caching, etc. But just as you can unthinkingly drop a probabilistic filter in front of a DB, I think you can and should be able to unthinkingly spread a bunch of work across a bunch of cores. This should be a simple, obvious, normal thing that people do by default whenever they care at all about performance, and with good, safe tools it can be.
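For a sense of how cheap "spread it across cores" can get with good safe tools, here's a sketch using the rayon crate (my example, not something named above): changing iter() to par_iter() is the entire parallelization step.

```rust
// Cargo.toml: rayon = "1"
use rayon::prelude::*;

fn main() {
    let data: Vec<u64> = (0..10_000_000).collect();

    // par_iter() fans the map/sum out across a thread pool sized to the
    // machine's cores; the closure is checked for thread safety by the
    // compiler, so there's nothing to get wrong at the call site.
    let total: u64 = data.par_iter().map(|n| n * n).sum();
    println!("{total}");
}
```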


