You can get pretty far by keeping sharing to an absolute minimum, and when you do need to share data, slap a lock-free ring buffer between the threads to communicate. Pretty simple to get right.


You are right, but what you call "lock free" is not the same as many other things that are called "lock free". A ring buffer indeed needs no locks, but the overloaded term may confuse newbies.

I strongly dislike the term "lock free". It is really just a marketing term, invented by people promoting the idea that some algorithms are better than "lock-based" ones, when in fact those "lock-free" algorithms merely choose a different performance trade-off, which can be better or worse depending on the application.

Even worse, after "lock free" became fashionable it was also applied to unrelated algorithms, so the term is now ambiguous: you cannot know for sure what is meant by it unless more details are provided.

Shared data structures are most frequently accessed in one of three ways.

The first is mutual exclusion: the shared data structure is accessed within critical sections, and only one thread can execute a critical section at a given time. This method is usually called lock-based access.
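
A minimal sketch of the lock-based style in C++ (the shared counter and the names are purely illustrative):

    #include <cstdio>
    #include <mutex>
    #include <thread>
    #include <vector>

    // Shared state protected by a mutex: only one thread at a time
    // may execute the critical section that touches `counter`.
    static std::mutex m;
    static long counter = 0;

    void worker() {
        for (int i = 0; i < 100000; ++i) {
            std::lock_guard<std::mutex> guard(m);  // enter critical section
            ++counter;                             // exclusive access
        }                                          // lock released here
    }

    int main() {
        std::vector<std::thread> threads;
        for (int t = 0; t < 4; ++t) threads.emplace_back(worker);
        for (auto& th : threads) th.join();
        std::printf("%ld\n", counter);  // always 400000
    }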

The second is optimistic access: the shared data structure is accessed concurrently by many threads, but they are able to detect interference from other concurrent accesses and retry in such cases. This is what is most frequently referred to as "lock free" access. Compared to mutual exclusion, this method may be faster in the best cases, but it is much slower in the worst cases, so whether it is a good choice depends on the application.
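
The same counter in the optimistic style, sketched as a C++ compare-and-swap retry loop (again, the counter is just an illustration):

    #include <atomic>

    static std::atomic<long> acounter{0};

    // Optimistic access: read the current value, compute the update,
    // and try to publish it. If another thread interfered in between,
    // compare_exchange_weak fails and we retry with the fresh value.
    void increment_optimistic() {
        long expected = acounter.load(std::memory_order_relaxed);
        while (!acounter.compare_exchange_weak(
                   expected, expected + 1,
                   std::memory_order_release,
                   std::memory_order_relaxed)) {
            // `expected` has been reloaded for us; just retry.
        }
    }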

The third method is possible when the shared resource can be partitioned between the threads that access it concurrently, so their concurrent accesses can proceed without fear of interference. Such partitioning is usually possible for arrays and for buffers, a.k.a. FIFO memories, a.k.a. message queues (including one-to-one, many-to-one, one-to-many and many-to-many message queues).
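
A sketch of the partitioned style: a single-producer single-consumer ring buffer, where the producer owns the head index and the consumer owns the tail, so neither thread ever waits or retries. The names and sizing are illustrative; a production version needs a careful memory-order review.

    #include <array>
    #include <atomic>
    #include <cstddef>
    #include <optional>

    // Single-producer single-consumer ring buffer. The producer writes
    // only `head_`, the consumer writes only `tail_`, so the mutable
    // state is effectively partitioned between the two threads.
    template <typename T, size_t N>  // N must be a power of two
    class SpscRing {
        std::array<T, N> buf_;
        std::atomic<size_t> head_{0};  // next slot to write (producer)
        std::atomic<size_t> tail_{0};  // next slot to read (consumer)
    public:
        bool push(const T& v) {        // called by the producer only
            size_t h = head_.load(std::memory_order_relaxed);
            if (h - tail_.load(std::memory_order_acquire) == N)
                return false;          // full
            buf_[h % N] = v;
            head_.store(h + 1, std::memory_order_release);
            return true;
        }
        std::optional<T> pop() {       // called by the consumer only
            size_t t = tail_.load(std::memory_order_relaxed);
            if (head_.load(std::memory_order_acquire) == t)
                return std::nullopt;   // empty
            T v = buf_[t % N];
            tail_.store(t + 1, std::memory_order_release);
            return v;
        }
    };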

So your "lock free ringbuffer" refers to the third method above, which is very different from the "lock free" algorithms of the second kind.

Whenever concurrent access to a partitioned shared resource is possible, it is much better than mutual exclusion or optimistic access, which require either waiting or retrying, both of which waste CPU time.

Therefore correctly-implemented message queues or other kinds of shared buffers are usually the best way to achieve high levels of concurrency, compared with other kinds of shared data structures, because they avoid the bottlenecks caused by mutual exclusion or optimistic access.


FWIW, lock-free is an academic term and it is not really about performance.


The fact that it was coined by academics in research papers does not contradict the fact that it was chosen exactly like any marketing term: to imply that something is better than it really is.

The alternative term "optimistic access" describes the essence of those algorithms much better, while "lock free" hides their nature and makes them look like something guaranteed to be better (so that money for researching them is justified), because locks are supposed to be bad.

"Lock free" and "wait free" have been buzzwords that have provided subjects for a huge number of research papers in the academia, most of which have been useless in practice, because the described algorithms have been frequently worse than the lock-based algorithms that they were supposed to replace.


I don't agree with your characterization of these algorithms as "worse".

They have a desirable property. If you needed a wait-free algorithm, and this is a wait-free algorithm, then it is not "worse" for you than an existing algorithm that isn't wait-free, regardless of whether it is slower, more memory-intensive, or whatever. You needed wait-free, and this is wait-free.

Why is wait-free desirable? Well, unlike a lock-free algorithm, a wait-free algorithm makes progress within a bounded time for every thread, and it may well be that one stalled thread is much worse for your system than a bad average.
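
To make the distinction concrete, a hedged C++ sketch: an atomic fetch_add completes in a bounded number of steps for every caller on typical hardware (wait-free there; on x86 it compiles to LOCK XADD), while a CAS retry loop only guarantees that some thread makes progress (lock-free, not wait-free). Whether fetch_add is truly wait-free is platform-dependent.

    #include <atomic>

    static std::atomic<long> n{0};

    // Wait-free on typical hardware: every caller finishes in a
    // bounded number of steps; no caller can be starved by others.
    void bump_wait_free() {
        n.fetch_add(1, std::memory_order_relaxed);
    }

    // Lock-free but not wait-free: the system as a whole always makes
    // progress, but an unlucky thread can lose the CAS race forever.
    void bump_lock_free() {
        long v = n.load(std::memory_order_relaxed);
        while (!n.compare_exchange_weak(v, v + 1,
                                        std::memory_order_relaxed))
            ;  // retry with the reloaded value in `v`
    }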

If you mean "fast on average", say that. If you mean (as C++ programmers often do) "validates my feeling of self-worth", then say that. I don't know whether anybody wants to pay you more money to validate your self-worth, but at least you're being honest about your priorities.


Thanks.


Step 1: use concurrency at a very high level. For example, write an app, then run 4 instances of it, each working on a quarter of your data.
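
A hedged sketch of that idea in C++: the program takes a shard index and shard count on the command line and processes only its own slice. The argument layout, record count, and process_record are hypothetical, just one way to do it.

    #include <cstdio>
    #include <cstdlib>

    // Invoked as e.g.:  ./app 0 4 & ./app 1 4 & ./app 2 4 & ./app 3 4 &
    // Each instance handles one contiguous quarter of the records, so
    // the processes never share mutable state at all.
    int main(int argc, char** argv) {
        if (argc != 3) {
            std::fprintf(stderr, "usage: %s <shard> <num_shards>\n", argv[0]);
            return 1;
        }
        long shard = std::atol(argv[1]);
        long num_shards = std::atol(argv[2]);
        const long total = 1000000;  // illustrative record count
        long begin = total * shard / num_shards;
        long end = total * (shard + 1) / num_shards;
        for (long i = begin; i < end; ++i) {
            // process_record(i);  // hypothetical per-record work
        }
        std::printf("shard %ld handled [%ld, %ld)\n", shard, begin, end);
    }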

Step 2: when you absolutely need concurrency within one app, try using a library that has already solved the issue. For example, there are lots of databases with pretty strong concurrency contracts that are still efficient.

Step 3: if you absolutely need a custom solution, use a library/language that provides reasonable tools out of the box, and keep your own logic to a minimum.

Following the first two steps will solve 95% of your concurrency issues; adding step 3 takes it to 99%.


Thanks anal reactor!



