
I remember picking up this sort of advice from a professor way back in college. It's a godsend. Structure the problem as data flowing between tasks, connect them with queues, and avoid sharing state. It's just a better way to deal with multithreading no matter what language you use.
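A minimal sketch of that structure in Go, using channels as the queues. The stage names and payload type are made up for illustration; the point is that each goroutine owns its own state and only passes messages downstream.

    package main

    import "fmt"

    func producer(out chan<- int) {
        // Each stage only writes to its outbound queue.
        for i := 1; i <= 5; i++ {
            out <- i
        }
        close(out)
    }

    func square(in <-chan int, out chan<- int) {
        // Transform stage: reads from one queue, writes to the next.
        for v := range in {
            out <- v * v
        }
        close(out)
    }

    func main() {
        stage1 := make(chan int, 8) // buffered channels act as small queues
        stage2 := make(chan int, 8)

        go producer(stage1)
        go square(stage1, stage2)

        // The consumer runs on the main goroutine; no state is shared,
        // so no mutexes are needed anywhere in the pipeline.
        for v := range stage2 {
            fmt.Println(v)
        }
    }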


There is a time and place for sharing state and data, but it is extremely complex to get right, so avoid it if at all possible. In general, the only time I can't use queues is when I'm writing the queue implementation itself (I've done this several times - it turns out there are a number of special cases in my embedded system where it was worth it to avoid some obscure downside of the queues I already had).

When you need the absolute best performance, sharing state is sometimes better - but you need a deep understanding of how your CPUs share memory. A mutex or atomic write operation is almost always needed (the exceptions are really weird), and those will kill performance, so you'd better spend a lot of time minimizing where you use them.
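A hedged sketch of the "minimize the synchronized region" idea in Go: do the expensive work outside the lock and only touch shared state inside it. The names (cache, computeExpensive) are hypothetical.

    package main

    import "sync"

    type cache struct {
        mu   sync.Mutex
        data map[string]int
    }

    func computeExpensive(key string) int {
        // Stand-in for real work; runs with no lock held.
        return len(key) * 42
    }

    func (c *cache) put(key string) {
        v := computeExpensive(key) // heavy work outside the critical section

        c.mu.Lock()
        c.data[key] = v // the lock only covers the shared-state update
        c.mu.Unlock()
    }

    func main() {
        c := &cache{data: make(map[string]int)}
        var wg sync.WaitGroup
        for _, k := range []string{"a", "bb", "ccc"} {
            wg.Add(1)
            go func(k string) {
                defer wg.Done()
                c.put(k)
            }(k)
        }
        wg.Wait()
    }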


I like this too.

I would also suggest looking into ring buffers and the LMAX Disruptor pattern.
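A rough sketch of the single-producer/single-consumer ring buffer idea behind designs like the Disruptor: a preallocated array plus two monotonically increasing sequence counters, no locks on the hot path. This is illustrative only, not the actual LMAX implementation.

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    const size = 8 // power of two so we can mask instead of mod

    type spscRing struct {
        buf  [size]int64
        head atomic.Uint64 // next slot the consumer will read
        tail atomic.Uint64 // next slot the producer will write
    }

    // tryPush returns false when the ring is full.
    func (r *spscRing) tryPush(v int64) bool {
        t := r.tail.Load()
        if t-r.head.Load() == size {
            return false
        }
        r.buf[t&(size-1)] = v
        r.tail.Store(t + 1) // publish only after the slot is written
        return true
    }

    // tryPop returns false when the ring is empty.
    func (r *spscRing) tryPop() (int64, bool) {
        h := r.head.Load()
        if h == r.tail.Load() {
            return 0, false
        }
        v := r.buf[h&(size-1)]
        r.head.Store(h + 1) // free the slot only after reading it
        return v, true
    }

    func main() {
        var r spscRing
        done := make(chan struct{})

        go func() { // single consumer
            for n := 0; n < 4; {
                if v, ok := r.tryPop(); ok {
                    fmt.Println(v)
                    n++
                }
            }
            close(done)
        }()

        for i := int64(0); i < 4; { // single producer
            if r.tryPush(i) {
                i++
            }
        }
        <-done
    }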

There is also Red Planet Labs' Rama, which takes the dataflow idea and uses it to scale out.


Async is not a solution for data parallelism.



