The problems I have with PostgreSQL in a web backend context are:
Limited support for master/master replication (for availability); there are various approaches but none of them is really first-class in terms of driver support etc.
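To illustrate why master/master is hard to bolt on after the fact, here's a toy simulation (plain Python, no real database, all names made up) of two masters accepting conflicting writes while partitioned and then replicating to each other with crude last-write-wins:

```python
# Toy illustration: two "masters" each accept a write to the same key
# while partitioned, then replicate to each other. Without a first-class
# conflict-resolution story, the cluster has to pick a winner somehow --
# here, last-write-wins on a logical timestamp.

def replicate(dst, src):
    """Merge src replica's rows into dst, last-write-wins by timestamp."""
    for key, (value, ts) in src.items():
        if key not in dst or ts > dst[key][1]:
            dst[key] = (value, ts)

# Each replica maps key -> (value, logical timestamp)
master1 = {"user:1": ("alice@old.example", 1)}
master2 = dict(master1)

# Concurrent conflicting writes accepted on both masters
master1["user:1"] = ("alice@new.example", 2)
master2["user:1"] = ("alice@other.example", 3)

# Bidirectional replication: replicas converge, but the ts=2 write is
# silently discarded -- exactly the kind of edge case drivers don't help with.
replicate(master1, master2)
replicate(master2, master1)
assert master1 == master2
print(master1["user:1"][0])  # -> alice@other.example
```

The point of the sketch is that converging at all requires picking a conflict policy; the various Postgres multi-master add-ons each do this differently, which is part of why none of it feels first-class from the client side.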
Limited support for async (non-blocking) connections in a lot of languages. Again, it's often an option but not really first-class.
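The payoff you're missing without a first-class async driver looks roughly like this simulation, where asyncio.sleep stands in for a 100 ms database round-trip (no real driver or database involved):

```python
import asyncio
import time

# Simulation only: asyncio.sleep stands in for a 100 ms query round-trip.
# With a blocking driver, 5 requests serialize to ~500 ms of wall time;
# with a non-blocking driver the waits overlap and take ~100 ms total.

async def fake_query():
    await asyncio.sleep(0.1)  # pretend network round-trip to the database
    return "row"

async def handle_requests_concurrently(n):
    # All n "queries" are in flight at once on a single thread
    return await asyncio.gather(*(fake_query() for _ in range(n)))

start = time.monotonic()
rows = asyncio.run(handle_requests_concurrently(5))
elapsed = time.monotonic() - start
print(f"{len(rows)} queries in {elapsed:.2f}s")  # ~0.10s, not ~0.50s
```

When the driver only exposes blocking calls, the usual workaround is a thread pool per connection, which is exactly the kind of second-class support being complained about here.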
Limited support for working with non-square-table data (even just simple things like having a "column" that holds a collection). Again, it's supported by the server but often not really first-class in the driver etc.
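When the driver doesn't handle collection-valued columns for you, the classic workaround is a child table plus a manual join on read. A sketch of that flattening and reassembly, with tables as plain Python lists and all names invented:

```python
# Sketch of the usual relational workaround for a record containing a
# collection: split it into a parent row plus one child row per element,
# then "join" them back together on read. Lists stand in for tables.

order = {"id": 7, "customer": "carol", "items": ["book", "pen", "mug"]}

# Write side: one parent row, one child row per collection element,
# with a position column so ordering survives the round-trip
orders_table = [(order["id"], order["customer"])]
order_items_table = [
    (order["id"], pos, item) for pos, item in enumerate(order["items"])
]

# Read side: reassemble the nested shape by hand
def load_order(order_id):
    oid, customer = next(r for r in orders_table if r[0] == order_id)
    items = [item for (fk, _pos, item) in sorted(order_items_table) if fk == oid]
    return {"id": oid, "customer": customer, "items": items}

assert load_order(7) == order
```

This is all boilerplate the application has to carry (or delegate to an ORM) precisely because the collection isn't a first-class value end to end.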
Rarely clear what the performance behaviour of a given query is going to be - you can write a query that looks exactly like an indexed join (fast) but it will silently decay into a table scan (very slow).
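The size of that cliff is worth making concrete. Counting rows touched rather than timing anything (a sorted list stands in for a btree; no real planner involved):

```python
# Rough intuition for the cost cliff between an index lookup and a
# silent fall-back to a full scan: O(log n) rows touched vs O(n).

import bisect

n = 1_000_000
keys = list(range(n))  # sorted list standing in for a btree index

def indexed_lookup(k):
    """~log2(n) ~= 20 comparisons via binary search."""
    i = bisect.bisect_left(keys, k)
    return i < n and keys[i] == k

def table_scan(k, counter):
    """Examines every row until it finds a match."""
    for row in keys:
        counter[0] += 1
        if row == k:
            return True
    return False

counter = [0]
table_scan(n - 1, counter)
print(counter[0])  # 1_000_000 rows touched, vs ~20 for the index path
```

The complaint is that nothing in the query text tells you which of these two costs you're going to pay; you only find out from EXPLAIN or from production latency.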
Performance is slowed by mandatory transaction functionality that hardly ever makes sense in the context of a web app. You have to have a lot of indexes (otherwise you'll get the aforementioned silent slow queries), but PostgreSQL will insist on updating all of those indexes before it will ever acknowledge a write. The overwhelming majority of the time, what you really want in a web app context is a "deferred index", but PostgreSQL doesn't have those; your only options are to wait for all index updates to happen or to do a fire-and-forget write without getting an acknowledgement at all.
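The write-amplification arithmetic here is simple enough to sketch. A back-of-envelope model in abstract work units (the "deferred index" is hypothetical; PostgreSQL offers no such mode):

```python
# Back-of-envelope model: every acknowledged write pays for updating
# every index synchronously, so a table with k indexes does k+1 units
# of work before the client hears anything. A hypothetical "deferred
# index" would acknowledge after 1 unit and catch the indexes up later.

def synchronous_ack_cost(num_indexes):
    heap_write = 1
    index_updates = num_indexes      # all paid before the acknowledgement
    return heap_write + index_updates

def deferred_ack_cost(num_indexes):
    heap_write = 1                   # acknowledge after this alone
    backlog = num_indexes            # drained later, off the hot path
    return heap_write, backlog

# A heavily-indexed table (8 indexes) pays 9 units of latency-critical
# work per write today; deferred maintenance would pay 1.
assert synchronous_ack_cost(8) == 9
assert deferred_ack_cost(8) == (1, 8)
```

The model ignores WAL, locking, and batching, but it captures the complaint: the latency-critical path grows linearly with index count, and there's no middle ground between full synchrony and no acknowledgement.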
The design encourages you to run reporting queries/aggregations without any separation from your "live" operational updates. Because of the aforementioned mandatory transaction isolation that you're almost certainly not actually getting any value out of, it's very easy for those reporting queries to block, deadlock, or slow down your live operations. The tooling around tracking and restricting slow queries is very limited, and because of the aforementioned unclear performance it's not always obvious which queries are going to be slow ahead of time.
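The head-of-line blocking this produces can be shown with a deterministic toy: one shared resource, a 2-second reporting query, and a stream of 10 ms operational writes on a simulated clock (no threads, no database):

```python
# Deterministic sketch of head-of-line blocking: jobs share one
# resource (think: a lock or a saturated connection pool) and run FIFO
# on a simulated clock. Durations are in milliseconds.

def run(jobs):
    """jobs: list of (name, duration_ms), executed one at a time, FIFO.
    Returns each job's completion time on the simulated clock."""
    clock, done = 0, {}
    for name, ms in jobs:
        clock += ms
        done[name] = clock
    return done

writes = [(f"write{i}", 10) for i in range(5)]

fast_only = run(writes)
with_report = run([("report", 2000)] + writes)

print(fast_only["write4"])    # 50 ms: writes complete promptly
print(with_report["write4"])  # 2050 ms: every write queues behind the report
```

Nothing in the system design pushes you to route that report somewhere else (a replica, a warehouse), so the 40x latency hit above is the default failure mode rather than something you have to opt into.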
Aggregation has to be done in a horrible pseudo-English query language in which composing queries together is effectively impossible. Also, even simple key lookups use this language, meaning a lot of time wasted in parsing (I once saw profiling results showing that something like 3/4 of the time taken to process a simple pkey fetch was spent parsing the query).
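The parse-overhead point can be illustrated with a toy: a naive server re-tokenizes the SQL text on every request, while caching the parsed form (which is what prepared statements do) pays the cost once. The tokenizer here is deliberately dumb and just stands in for real parse work:

```python
# Toy illustration of parse overhead on a point lookup. A regex
# tokenizer stands in for a real SQL parser; parse_calls counts how
# often the "expensive" step runs.

import re

TOKEN = re.compile(r"[A-Za-z_]\w*|\d+|[^\s\w]")
parse_calls = 0

def parse(sql):
    global parse_calls
    parse_calls += 1
    return TOKEN.findall(sql)

plan_cache = {}

def execute(sql, cached):
    if cached:
        if sql not in plan_cache:       # parse once, reuse forever
            plan_cache[sql] = parse(sql)
        return plan_cache[sql]
    return parse(sql)                   # naive: re-parse every request

sql = "SELECT * FROM users WHERE id = $1"

for _ in range(1000):
    execute(sql, cached=False)
naive = parse_calls

parse_calls = 0
for _ in range(1000):
    execute(sql, cached=True)
print(naive, parse_calls)  # 1000 parses vs 1
```

Prepared statements and driver-side statement caches are real mitigations, but they're opt-in plumbing; a key-value protocol would never have had the cost in the first place, which is the complaint.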
A lot of serious engineering effort goes into these systems, but most of it is wasted in a web backend context. (Indeed, for a long time the most popular RDBMS for web backends was MySQL, despite it having, at the time, very little support for all the supposedly-important RDBMS features like ACID.) If the RDBMS were invented today I don't think anyone would use it for a web backend - it just makes the wrong tradeoffs for that context.