For fire-and-forget type jobs you can use lists instead of pub/sub: the producer pushes a job onto a list, and a consumer pops it off the other end and executes it. It's also very easy to scale: just start more producers and consumers.
We're currently using this technique to process ~2M jobs per day, and we're just getting started. Redis needs very little memory for this, just a few MB.
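That pattern maps onto two Redis commands: LPUSH to enqueue and BRPOP to block until a job arrives. Here's a minimal sketch of the flow, using a plain in-memory deque as a stand-in for the Redis list so it runs without a server; with redis-py you'd call `r.lpush` / `r.brpop` where noted (all names here are illustrative):

```python
import json
from collections import deque

# Stand-in for the Redis list. With redis-py this would be
# r = redis.Redis(), and the LPUSH/BRPOP calls noted below.
jobs = deque()

def produce(payload):
    # Producer side: serialize the job and push it onto the queue.
    # Redis equivalent: r.lpush("jobs", json.dumps(payload))
    jobs.appendleft(json.dumps(payload))

def consume():
    # Consumer side: pop the oldest job off the other end.
    # Redis equivalent: _key, raw = r.brpop("jobs")  (blocks until a job exists)
    raw = jobs.pop()
    return json.loads(raw)

produce({"task": "send_email", "to": "user@example.com"})
job = consume()
print(job["task"])  # -> send_email
```

Scaling out is just running more processes doing `produce` or `consume` against the same list; BRPOP guarantees each job is delivered to exactly one consumer.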
Beware of the scale-up challenges with Redis. Redis can only utilize a single core. If you do anything sophisticated that needs to be atomic, then you can't scale out to multiple servers, and you can't scale up to multiple cores.
At least with Postgres you can scale up trivially: Postgres will efficiently take advantage of as many cores as you give it. For scale-out you will need to move to a purpose-built queuing solution.
The problem is that you can't atomically write to your other database and also put a message on a redis queue. So you'll either end up with db changes not conveyed to redis, or you'll have messages on redis not reflected by changes to the db.
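A runnable toy illustration of that failure window, with plain lists standing in for the database and the Redis queue (all names here are made up; a real system would be a SQL transaction plus an LPUSH):

```python
# Illustrative stand-ins for a SQL table and a Redis list.
db_rows = []
queue = []

def place_order(order, crash_after_db=False):
    db_rows.append(order)                    # step 1: commit the row to the db
    if crash_after_db:
        raise RuntimeError("process died")   # crash between the two writes
    queue.append(order["id"])                # step 2: enqueue the job (LPUSH)

try:
    place_order({"id": 1}, crash_after_db=True)
except RuntimeError:
    pass

# The row exists but the job was never enqueued: the two stores disagree.
print(len(db_rows), len(queue))  # -> 1 0
```

Flipping the order of the two writes just flips the failure mode: you'd get a job on the queue with no matching row in the db.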
Redis also supports ACID-style transactions via MULTI/EXEC, though with a caveat: there's no rollback, so a command that fails mid-transaction doesn't undo the ones before it.
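For illustration, a MULTI/EXEC session queues commands and then runs them atomically as a batch (outputs shown assume fresh keys):

```
127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> LPUSH jobs "job-1"
QUEUED
127.0.0.1:6379> INCR jobs:enqueued
QUEUED
127.0.0.1:6379> EXEC
1) (integer) 1
2) (integer) 1
```

Note that this only covers keys inside Redis; it doesn't help with the cross-store atomicity problem above.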