I think it's closer to "Celery is a great tool to get started, but as your product matures you'll end up falling back to a simple AMQP consumer stack":
- By default, task payloads are just pickled Python objects; if you move a function to a different module, deserialization can break;
- The whole worker process appears to be reloaded for each task, so any heavy initialization ends up being repeated on every call;
- I couldn't find any "ops" documentation: how does it interact with RabbitMQ? How are deferred tasks implemented? What happens if a node crashes?
Although the API is nice, the product itself seems ill-suited for reliable production use. At that point, it's often easier to deploy your own minimal consumer using the serialization format you've already adopted (JSON, Protobuf, ASN.1, … ;)
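The pickle complaint in the first bullet is easy to reproduce in isolation. This is a standalone sketch, not actual Celery code: it builds a throwaway module to stand in for a tasks module, pickles a reference to one of its functions, then removes the module to simulate a refactor.

```python
import pickle
import sys
import types

# Build a throwaway "tasks_v1" module with one function in it.
mod = types.ModuleType("tasks_v1")
exec("def send_email(to): return 'sent to ' + to", mod.__dict__)
sys.modules["tasks_v1"] = mod

# pickle stores functions by reference, i.e. "tasks_v1.send_email".
payload = pickle.dumps(mod.send_email)

# The refactor: tasks_v1 no longer exists.
del sys.modules["tasks_v1"]

try:
    pickle.loads(payload)
    error = None
except ImportError as exc:  # ModuleNotFoundError on Python 3
    error = exc

print(type(error).__name__)
```

Any message sitting in the queue at deploy time hits the same failure, which is why renaming or moving task functions is so hazardous with pickled payloads.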
I’ve found Celery to be extremely useful and reliable at handling > 1MM daily async tasks with non-deterministic latencies. The programming interface melds beautifully with Flask.
Celery has been pretty reliable when I have used it in the past. Admittedly I have never done anything particularly fancy with it, just running some background tasks and sending out some emails.
It doesn’t solve a single problem that a set of bare queue consumers can’t.
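For what a bare consumer looks like, here is a minimal sketch. The task registry, the JSON envelope shape, and the commented pika wiring are all illustrative assumptions, not from any of the comments above; the dispatch logic itself is broker-agnostic.

```python
import json

# Hypothetical registry mapping task names to handler functions.
# A JSON envelope {"task": ..., "args": ...} replaces pickled payloads,
# so handlers are looked up by name rather than by Python import path.
HANDLERS = {}

def task(fn):
    """Register a function as a named task."""
    HANDLERS[fn.__name__] = fn
    return fn

@task
def send_email(to, subject):
    return f"emailed {to}: {subject}"

def handle_message(body: bytes):
    """Decode a JSON task envelope and dispatch it to its handler."""
    msg = json.loads(body)
    return HANDLERS[msg["task"]](*msg["args"])

# With pika and RabbitMQ, this would plug in roughly as:
#   channel.basic_consume(
#       queue="tasks",
#       on_message_callback=lambda ch, method, props, body:
#           handle_message(body))

envelope = json.dumps({"task": "send_email",
                       "args": ["bob@example.com", "hi"]}).encode()
print(handle_message(envelope))  # → emailed bob@example.com: hi
```

Because tasks are addressed by a plain string name, moving `send_email` to another module changes nothing on the wire, which sidesteps the pickle problem raised earlier in the thread.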