400ms of added latency is really bad for user experience: a page that runs just five queries back to back is already two seconds in before it renders. Do a few queries and you're going to need to add caching. Now you're spending your precious developer time managing cache invalidation in lots of places instead of just setting up your database properly in the beginning.
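To make the invalidation pain concrete, here's a minimal sketch (the cache type and the database helpers are hypothetical; the 400ms figure is the one from above):

```rust
use std::collections::HashMap;

// Hypothetical read-through cache; names and values are illustrative.
struct UserCache {
    cache: HashMap<u64, String>,
}

impl UserCache {
    fn get_name(&mut self, id: u64) -> String {
        // Read path: serve from cache, fall back to the slow remote query.
        self.cache
            .entry(id)
            .or_insert_with(|| query_database(id))
            .clone()
    }

    fn rename_user(&mut self, id: u64, name: &str) {
        update_database(id, name);
        // The part that eats developer time: EVERY write path in the
        // codebase must remember to invalidate, or readers see stale data.
        self.cache.remove(&id);
    }
}

fn query_database(_id: u64) -> String {
    // Stands in for a ~400ms round trip to the remote database.
    "Alice".into()
}

fn update_database(_id: u64, _name: &str) {
    // Stands in for a remote write.
}

fn main() {
    let mut users = UserCache { cache: HashMap::new() };
    println!("{}", users.get_name(1)); // slow: hits the database
    println!("{}", users.get_name(1)); // fast: cached
    users.rename_user(1, "Bob"); // correctness now hinges on one remove()
}
```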
I understand there are ways to deal with latency in serverless, but I'd rather not have the problem in the first place. The database IS the application, and I wouldn't sacrifice database speed for anything. For me, serverless simply isn't worth the trade-off: slightly more convenient deployments in exchange for much higher latency to the database.
I'm a solo dev who has been installing and running my own database server with backups for decades, and I've never had a problem with it. It's so simple, and I have no idea why people are so allergic to managing their own server. 99% of apps can run very snappily on a single server, and the simplicity is a breath of fresh air.
That's why I'm working hard on bringing tightly integrated SQLite support to the Elixir ecosystem (via a Rust FFI bridge): in my professional experience, not many applications need something as hardcore and amazing as PostgreSQL; at least 80% of the apps I've ever worked on would be just fine with an embedded database.
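For anyone who hasn't used the embedded model, here's a minimal sketch of what it looks like from the Rust side of such a bridge, using the rusqlite crate (the crate choice and the schema are illustrative, not the actual project):

```rust
// Assumes the rusqlite crate, built with its "bundled" SQLite feature.
use rusqlite::{params, Connection, Result};

fn main() -> Result<()> {
    // No server process, no network hop: the database is a file opened
    // in-process by the application itself.
    let conn = Connection::open("app.db")?;

    conn.execute(
        "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)",
        [],
    )?;
    conn.execute("INSERT INTO users (name) VALUES (?1)", params!["Alice"])?;

    // A query is a function call into linked C code; latency is measured
    // in microseconds, not in the network round trips discussed above.
    let count: i64 = conn.query_row("SELECT COUNT(*) FROM users", [], |row| row.get(0))?;
    println!("{count} users");
    Ok(())
}
```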
My experience is similar to yours and others' in this thread: to me, all those operational concerns grow into unnecessary noise that distracts from the real problems we're paid to solve.
It's not just cold starts (another problem you have to worry about with serverless). There's the simple fact that network latency outside the same datacenter is ALWAYS high and unpredictable, and it compounds when you have to run multiple queries just to render a single page for your user. In my opinion a database should always be on the LAN; if you need to access data over the internet, it should go through an API over HTTP at that point, not raw database access.
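Back-of-envelope numbers for how sequential round trips compound (the RTT figures and query count are illustrative assumptions, not measurements):

```rust
fn main() {
    // Per-query round-trip time (RTT) for a same-rack database vs. one
    // reached over the internet; each sequential query pays the RTT again.
    let queries_per_page = 10;
    for (label, rtt_ms) in [("same-rack LAN", 0.5_f64), ("over the internet", 50.0)] {
        let total_ms = queries_per_page as f64 * rtt_ms;
        println!("{label}: {queries_per_page} queries x {rtt_ms} ms = {total_ms} ms before render");
    }
}
```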