Hacker News

A small, single-core Redis server can do wonders.

For fire-and-forget jobs you can use lists instead of pub/sub: a producer pushes a job onto a list, and a consumer pops it off the other end and executes it. It's also very easy to scale: just start more producers and consumers.
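A minimal sketch of the push/pop pattern described above. To keep it runnable without a live server, it uses a tiny in-memory stand-in for the two Redis commands involved; with the real redis-py client the calls would be `r.lpush(...)` and the blocking `r.brpop(...)`. The key and job fields are made up for illustration.

```python
import json
from collections import deque

class FakeRedis:
    """In-memory stand-in for the two Redis list commands this pattern needs."""
    def __init__(self):
        self.lists = {}

    def lpush(self, key, value):
        # Push onto the left end of the list (like Redis LPUSH).
        self.lists.setdefault(key, deque()).appendleft(value)

    def rpop(self, key):
        # Pop from the right end (like Redis RPOP; BRPOP would block instead
        # of returning None when the list is empty).
        q = self.lists.get(key)
        return q.pop() if q else None

r = FakeRedis()

# Producer: serialize the job and push it onto the list.
r.lpush("jobs", json.dumps({"task": "send_email", "to": "user@example.com"}))

# Consumer: pop from the other end (FIFO order) and execute it.
raw = r.rpop("jobs")
job = json.loads(raw)
print(job["task"])  # send_email
```

Because LPUSH/BRPOP hand each job to exactly one consumer (unlike pub/sub, where every subscriber sees every message), adding consumers spreads the work automatically.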

We're currently using this technique to process ~2M jobs per day, and we're just getting started. Redis needs very little memory for this, just a few MB.

Redis also supports ACID-style transactions via MULTI/EXEC.
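MULTI/EXEC works by queuing commands and then applying the whole batch in one atomic step. To illustrate that queue-then-apply behavior without a live server, this sketch mocks the pipeline; with redis-py it would be `r.pipeline(transaction=True)`, and the key names here are invented.

```python
class FakePipeline:
    """Mock of Redis MULTI/EXEC: commands are queued, then applied together."""
    def __init__(self, store):
        self.store, self.queued = store, []

    def lpush(self, key, val):
        self.queued.append(("lpush", key, val))  # queued, not yet applied
        return self

    def incr(self, key):
        self.queued.append(("incr", key))
        return self

    def execute(self):
        # All queued commands run back to back; on a real server no other
        # client's command can interleave between them.
        results = []
        for cmd in self.queued:
            if cmd[0] == "lpush":
                self.store.setdefault(cmd[1], []).insert(0, cmd[2])
                results.append(len(self.store[cmd[1]]))
            elif cmd[0] == "incr":
                self.store[cmd[1]] = self.store.get(cmd[1], 0) + 1
                results.append(self.store[cmd[1]])
        self.queued = []
        return results

store = {}
pipe = FakePipeline(store)
pipe.lpush("jobs", "job-1")
pipe.incr("jobs_enqueued")
print(pipe.execute())  # [1, 1]
```

One caveat worth knowing: unlike a relational database, Redis does not roll back a transaction if a queued command fails, which is why "ACID-style" is the right hedge.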



Beware of the scale-up challenges with Redis. Redis can only utilize a single core. If you do anything sophisticated that needs to be atomic, then you can't scale out to multiple servers, and you can't scale up to multiple cores.

At least with Postgres you can scale up trivially: Postgres will efficiently take advantage of as many cores as you give it. For scale-out you will need to move to a purpose-built queuing solution.


Good point. My assumption is that the first hit would be memory usage, way before core usage.

There are many options for scaling:

- vertically scale by adding more memory

- start another Redis instance on a different port (it takes ~1 MB) if you decide to use more cores on the same VM

- split the data off onto another VM

- sharding comes out of the box, but that would be my last resort
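The second and third options above amount to client-side sharding: route each key to one of several independent Redis instances by hashing it, so a given key always lands on the same instance. A minimal sketch, where the instance addresses are hypothetical:

```python
import hashlib

# Hypothetical instances: e.g. one per core on the same VM (different
# ports) or spread across VMs.
INSTANCES = ["localhost:6379", "localhost:6380", "localhost:6381"]

def instance_for(key: str) -> str:
    # A stable hash (not Python's randomized hash()) so the same key
    # always routes to the same instance, across processes and restarts.
    digest = hashlib.sha1(key.encode()).hexdigest()
    return INSTANCES[int(digest, 16) % len(INSTANCES)]

print(instance_for("jobs:email"))
```

The trade-off: commands touching keys on different instances can no longer be atomic, which is the same caveat raised about Redis Cluster sharding above.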


The problem is that you can't atomically write to your other database and also put a message on a Redis queue. So you'll either end up with DB changes that are never conveyed to Redis, or with messages on Redis that aren't reflected by changes in the DB.
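A standard workaround for this dual-write problem (not mentioned in the thread) is a transactional outbox: write the domain change and a pending-job row in the same database transaction, then have a separate relay push unsent outbox rows to Redis. A minimal sketch of the DB side using sqlite3, with made-up table names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")
conn.execute(
    "CREATE TABLE outbox (id INTEGER PRIMARY KEY, payload TEXT, sent INTEGER DEFAULT 0)"
)

# The domain change and the queue message commit (or roll back) together.
with conn:
    conn.execute("INSERT INTO orders (item) VALUES (?)", ("book",))
    conn.execute(
        "INSERT INTO outbox (payload) VALUES (?)",
        ('{"job": "ship", "item": "book"}',),
    )

# A separate relay process would poll for unsent rows, LPUSH them to
# Redis, then mark them sent. That gives at-least-once delivery, so
# consumers must be idempotent.
rows = conn.execute("SELECT payload FROM outbox WHERE sent = 0").fetchall()
print(len(rows))  # 1
```

This trades immediacy (the relay adds latency) for the guarantee that no DB change is ever silently dropped from the queue.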



