the Val Town team were kind enough to share this article with me before they released it. Perhaps you know from previous HN threads that we take customer feedback very seriously. Hearing feedback like this is hard. Clearly the team at Val Town wanted Supabase to be great and we didn’t meet their expectations. For me personally, that hurts. A few quick comments
1. Modifying the database in production: I’ve published a doc on Maturity Models[0]. Hopefully this makes it clear that developers should be using Migrations once their project is live (not using the Dashboard to modify their database live). It also highlights the options for managing dev/local environments. This is just a start. We’re building Preview Databases into the native workflow so that developers don’t need to think about this.
2. Designing for Supabase: Our goal is to make all of Postgres easy, not obligatory. I’ve added a paragraph[1] in the first page in our Docs highlighting that it’s not always a good idea to go all-in on Postgres. We’ll add examples to our docs with “traditional” approaches like Node + Supabase, Rails + Supabase, etc. There are a lot of companies using this approach already, but our docs are overly focused on “the Supabase way” of doing things. There shouldn’t be a reason to switch from Supabase to any other Postgres provider if you want “plain Postgres”.
3. That said, we also want to continue making “all of Postgres” easy to use. We’re committed to building an amazing CLI experience. Like any tech, we’re going to need a few iterations. We’re building tooling for debugging and observability. We have index advisors coming[2]. We recently added Open Telemetry to Logflare[3] and added logging for local development[4]. We’re making platform usage incredibly clear[5]. We aim to make your database indestructible - we care about resilience as much as experience and we’ll make sure we highlight that in future product announcements.
I’ll finish with something that I think we did well: migrating away from Supabase was easy for Val Town, because it’s just Postgres. This is one of our core principles, “everything is portable” (https://supabase.com/docs/guides/getting-started/architectur...). Portability forces us to compete on experience. We aim to be the best Postgres hosting service in the world, and we’ll continue to focus on that goal even if we’re not there yet.
> I’ll finish with something that I think we did well: migrating away from Supabase was easy for Val Town, because it’s just Postgres
Y'all are saints in this space. Every other managed db provider does everything possible to make leaving as difficult as possible. Definitely gives me a lot more confidence using Supabase in the future
Appreciate this well-thought out response. As someone who has built several proof-of-concepts on Supabase (but never going far enough to test its limits), articles by Val Town here and responses like yours all work towards my analysis of the platform for future projects.
It's funny that threads like these bring up comments like "Well I use XYZ and it solves all of my problems." As if a one-time mention of a new PaaS is enough to bank on it for future projects. Although I can't lie - I do bookmark every PaaS that I see mentioned on HN.
Regardless, I'd much rather put my faith in a platform like SB that has been battle-tested in public, even if it doesn't work out perfectly every time.
Always glad to see you and the team showing up for the discussions and improving SB.
Not paradoxical at all. They're clearly interested in competing fairly instead of locking you in. That's a big advantage. They're also critically evaluating their approach. Exactly what I as a customer would want!
Let me just say that (for me) Supabase is one of the most exciting startups of the past couple years and I'm sure these issues will get ironed out eventually. I believe in your overall mission and am inspired by how much progress you all have made in just three years.
I feel like the issue with the Supabase dashboard and database modification is more one of your general approach. You put editing stuff all right up front when at best it should just be an emergency hatch, and the only place to find info on migrations is by going and looking around in the docs.
yes, I agree. We're working on ways to make the Migration system more prominent in the Dashboard. Preview Databases will help with this too.
> just be an emergency hatch
I would go as far as saying that migrations should still be used beyond the initial development. The Maturity Models linked above include 4 stages: Prototyping, Collaborating, Production, Enterprise. After "Prototyping", everything should be Migrations.
The exception is that you can use the Dashboard for local development. When you run "supabase start", you can access the Dashboard to edit your local database. From there you can run "supabase db diff" to convert your changes into a migration.
Hi, I recently gave Supabase a shot as an alternative to Firebase because I needed SQL. One thing that I've struggled with from the start is that Supabase seems to ignore backends completely.
I don't want to use supabase edge functions, since I want to keep it simple with a single express backend and don't want to be vendor-locked.
Currently only a hobbyist but so far I really enjoy using Supabase and have appreciated the generous free tier. Maybe some day, if I'm lucky, I'll prioritize my projects further and pursue monetization. It would be a great personal development if I needed to graduate to the paid tier.
Just wanted to add I’ve been using the local development and migrations workflow and it has been fantastic. Honestly the only issue I’ve really had is how frustratingly difficult it is to change an id field from int8 to uuid if I mistakenly don’t choose the right one at first and the migrations get stuck on that and I have to resort to manual hacking. Setting up local/staging/prod with this new system seems really easy. Nice work!
thanks Max, I'm also a fan of ionic. fwiw, I usually start with uuids now that they're natively supported: `id uuid primary key default gen_random_uuid()`. Changing a PK/FK type on a database definitely isn't trivial.
Hey, I’d love to try out Supabase as a backend for Flutter apps. However, the docs and scaffold code, like authentication flows, are a bit lacking for Flutter. What are the plans for improving the Flutter-related documentation and packages? Are there any good first issues on GitHub related to Supabase + Flutter?
Echoing most of the comments here: Love your product! This feedback in the article isn't even a setback, just a learning opportunity that Supabase users know will only make it better. You guys/gals at Supabase are crushing it!
For me it's the opposite. You've got the CEO speaking in corporate platitudes, trying to defend himself in the comments by shifting focus away from the actual issues at hand. An LLM helped summarize those issues:
Even though it looks like a great product initially, it has a lot of errors and bugs when you are trying to actually build something more robust than a toy app.
Local development is a massive pain with random bugs.
The response time of the database also varies all over the place.
But the most important problem that we faced, was having so much of application logic in the database.
Row level security is their "foundational piece", but there is a reason why we moved away from database functions and application logic in the database over a decade ago: that stuff is unmaintainable.
There is also really poor support and at the end of the day, the whole platform felt like a hack.
I think now, for most apps with up to 500,000 users (with 10,000 concurrent realtime connections), PocketBase is the best PaaS out there, having tested a bunch of them.
A single deployable binary which PocketBase provides is a breath of fresh air.
Anything more than that, just directly being on top of bare metal or AWS / GCP is much better.
> Row level security is their "foundational piece", but there is a reason why we moved away from database functions and application logic in the database over a decade ago: that stuff is unmaintainable.
Funny. In my experience, application-level authorization checks are very error-prone, easy to accidentally omit, and difficult to audit for correctness. "Unmaintainable", I suppose.
Whereas RLS gives you an understandable authorization policy with a baseline assurance that you're not accidentally leaking records you shouldn't be.
RLS is great, but it's not that hard to shoot yourself in the foot or miss stuff. E.g.:
ALTER TABLE bookmarks ENABLE ROW LEVEL SECURITY;
CREATE POLICY bookmarks_owner ON bookmarks USING (owner_id = auth.uid());
CREATE VIEW recent_bookmarks AS SELECT * FROM bookmarks ORDER BY created_at DESC LIMIT 5;
The above may look fine at first glance, but recent_bookmarks actually bypasses RLS.
Indeed - one of the great changes in v15. (for any folks on previous versions, you need to change the view owner to a non-superuser role without the bypassrls attribute).
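Concretely, something like this (view name taken from the example above; the role name is illustrative):

-- Postgres 15+: make the view run with the caller's privileges so RLS applies
ALTER VIEW recent_bookmarks SET (security_invoker = on);

-- Earlier versions: reassign the view to a non-superuser role without BYPASSRLS
ALTER VIEW recent_bookmarks OWNER TO app_user;  -- app_user is a hypothetical role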
Thanks for all your work on PostgREST, Steve! Do you think we'll see relational inserts in the near future, or is that still a bit down the road?
I agree. I would love to see more articles on pocketbase. It's phenomenal and ganigeorgiev is an animal about responding to bugs and discussions. He's got to be a hybrid human and ChatGPT robot.
What's also really cool is that you can also just use PocketBase as a Go library and just build your app around it like any normal web framework, while still having a great UI for quick prototyping. And when you need more custom behaviour instead of database functions, you just write some Go code while still compiling everything down to a single binary that you can copy over.
Personally, I had a really easy time getting Supabase to work locally. However, we use `dbmate` to manage our migrations instead of built-in Supabase migrations.
Also curious to hear from others on this:
> After a bit of sleuthing, it ended up that Supabase was taking a database backup that took the database fully offline every night, at midnight.
This seems like a terrible design decision if true. Why not just backup via physical or logical replication?
And totally hear the issues here with database resizing and vacuuming and other operations. That stuff is a big pain when it breaks.
To give context, Val Town have a particularly write-heavy setup, storing a lot of json strings. The nightly backups were causing write-contention, even at their relatively small size. We didn’t detect errors because they were application-level. We should have moved them to PITR as soon as they mentioned it since the timing was so obviously coinciding with backups. We’re investigating moving everyone to PITR (including the free tier). At the very least, we’ll add more control for backups - allowing users to change the maintenance window, or possibly disabling backups completely if they are managing it themselves.
How do people on HN like Row Level Security? Is it a better way to handle multi-tenant in a cloud SaaS app vs `WHERE` clauses in SQL? Worse? Nicer in theory but less maintainable in practice?
fwiw, Prisma has a guide on how to do RLS with its client. While the original issue[0] remains open, they have example code[1] with the client using client extensions[2]. I was going to try it out and see how it felt.
I use both for defence in depth. The SQL always includes the tenant ID, but I add RLS to ensure mistakes are not made. It can happen both ways: forget to include the tenant in the SQL, or disable RLS for the role used in some edge case. For multitenancy, I think it’s absolutely critical to have cross-tenancy tests with RLS disabled.
One of the things I think is important is to make the RLS query is super efficient - make the policy function STABLE and avoid database lookups, get the context from settings, etc.
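A rough sketch of that pattern (the setting, function, and table names here are just examples):

-- The request context is injected once per transaction, e.g.
--   SELECT set_config('app.tenant_id', '<tenant uuid>', true);
-- and read back in a STABLE function with no table lookups.
CREATE FUNCTION current_tenant_id()
RETURNS uuid
LANGUAGE sql STABLE
AS $$
  SELECT current_setting('app.tenant_id', true)::uuid
$$;

CREATE POLICY tenant_isolation ON orders
  USING (tenant_id = current_tenant_id());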
RLS is pretty great as a backstop, but I found Supabase over-reliant on RLS for security, when other RBACs are available in regular PG. I can’t remember the details now.
I’ve found RLS is great with Postgraphile which uses a similar system to Supabase but is a bit more flexible.
I found RLS challenging to work with when I prototyped an app with it and postgraphile.
I had seemingly-simple authz rules that RLS made challenging to express. I needed some operations to honor the user's row access privileges, but with different column SELECT/UPDATE privileges. E.g., a user can only change a value after the backend validates and processes the input, or they shouldn't be allowed to retrieve their password hash.
Expressivity was challenging, but was compounded by security being implicit. I couldn't look at any given spot in my code and confirm what data it's allowed to access - that depends on the privileges of the current DB connection. Once you mix in connections with cross-user privileges, that's a risky situation to try to secure.
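For concreteness, what I was trying to express was roughly "RLS for the rows, column-level grants for the columns", something like this (table, column, and role names are hypothetical):

-- RLS decides which rows the role may touch; column grants hide e.g. the password hash
REVOKE SELECT, UPDATE ON users FROM app_user;
GRANT SELECT (id, username, created_at) ON users TO app_user;
GRANT UPDATE (username) ON users TO app_user;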
The main issue we've had with it is that it's just plain slow for a lot of use cases, because Postgres will check the security for all rows before filtering on the joins, doing anything with WHERE clauses, doing anything to even tentatively take LIMIT into account, etc.
Imagine a 1-million-row table and a query with `WHERE x=y` that should result in about 100 rows. Postgres will do RLS checks on the full 1 million rows before the WHERE clause is involved at all.
I'm having a hard time relating to this comment given our own experience.
We use RLS extensively with PostgREST implementing much of our API. It _absolutely_ uses WHERE clauses and those are evaluated / indexes consulted before RLS is applied. Anything else would be madness.
> because Postgres will check the security for all rows before filtering on the joins, doing anything with WHERE clauses, doing anything to even tentatively take LIMIT into account, etc.
Note that the above only happens for non-inlinable[1] functions used inside RLS policies.
Going from what you mentioned below, it seems your main problem is SECURITY DEFINER functions, which aren't inlinable.
It's possible to avoid using SECURITY DEFINER, but that's highly application-specific.
Try it with RLS policies that have any plain JOINs in them to reference other tables and you'll see execution times balloon massively (as in, orders of magnitude worse) for a lot of simple use cases, because it's then doing the RLS checks against every involved table to determine if your original RLS check is allowed to use them. The only way around that if you have multiple tables involved in determining access is to use cached subqueries with SECURITY DEFINER functions that aren't subject to the recursive RLS checking.
You can use that to inject your ACL/permissions into a setting - set_config('app.permissions', '{"allowed":true}', true). Then in your RLS rules you can pluck them out - current_setting('app.permissions')::jsonb.
This should make your RLS faster than most other options, in theory, because of data co-location
That seems deeply impractical for a lot of cases. If user A has access to 80,000 of those 1,000,000 rows in a way that's determined from another table rather than as part of in-row metadata, doing the lookups to JSONify 80,000 UUIDs as an array to pass along like that really isn't going to help beyond cutting down a 20-second query response to a still-unacceptable 7-second query response [1] just to get 100 rows back.
[1]: Both numbers from our own testing, where the 7 seconds is the best we've been able to make it by using a SECURITY DEFINER function in a `this_thing_id IN (SELECT allowed_thing_ids())` style, which should have basically the same result in performance terms as separately doing the lookup with pre-fetching, because it's still checking the IN clause for 1,000,000 rows before doing anything else.
You certainly wouldn't want to inject 80K UUIDs. I'm not sure I understand the structure you're using but if you want to send me some details (email is in my profile) I'd like to dig into it
One of the tenants views a page that does a simple `SELECT * FROM products ORDER BY updated_at LIMIT 100`. The RLS checks have to reference `products` -> `tenants` -> `tenant_users`, but because of how Postgres does it, every row in products will be checked no matter what you do. (Putting a WHERE clause on the initial query to limit based on tenant or user is pointless, because it'll do the RLS checks before the WHERE clause is applied.) Joins in RLS policies are awful for performance, so your best bet is an IN clause with the cached subquery function, in which case it's still then got the overhead of getting the big blob of IDs and then checking it against every row in `products`.
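Roughly, that workaround looks like this (names are illustrative; auth.uid() as in the earlier examples):

-- SECURITY DEFINER skips the recursive RLS checks on tenant_users,
-- and STABLE lets Postgres evaluate the subquery once per statement
CREATE FUNCTION allowed_tenant_ids()
RETURNS SETOF uuid
LANGUAGE sql STABLE SECURITY DEFINER
AS $$
  SELECT tenant_id FROM tenant_users WHERE user_id = auth.uid()
$$;

CREATE POLICY products_tenant_isolation ON products
  USING (tenant_id IN (SELECT allowed_tenant_ids()));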
Yes. That's also irrelevant to the cause of the performance issues, which all happen before the ORDER BY and LIMIT even come into the picture in Postgres' query optimization.
Edit: To give a better idea of the impact of RLS here, writing up an equivalent query outside of the RLS context [1] has an under-1-second response time, where RLS turns that into 10x the time even in the most optimized case.
[1]: This kind of thing, roughly:
SELECT *
FROM products
JOIN tenants ON products.tenant_id = tenants.id
JOIN tenant_users ON tenants.id = tenant_users.tenant_id
WHERE tenant_users.user_id = auth.uid()
ORDER BY updated_at
LIMIT 100
Hi - We're an analytics solution for a specific vertical, so this is probably not appropriate for everyone but - what we did was create partitioned data tables that are named using a hash of the user UUID and other context to create the partition table name upon provisioning data tables for the user. The parent table is never accessed directly. We're using Supabase, but we don't use Supabase's libraries to operate this.
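Roughly, the shape is one partition (or dedicated table) per user; a simplified sketch using declarative partitioning - the actual naming/hashing scheme is more involved than this:

CREATE TABLE metrics (
  user_id uuid NOT NULL,
  recorded_at timestamptz NOT NULL DEFAULT now(),
  payload jsonb
) PARTITION BY LIST (user_id);

-- provisioning a user creates a partition whose name is derived from a hash
-- of the user's UUID; the name and UUID below are purely illustrative
CREATE TABLE metrics_p_3fa2b1c9 PARTITION OF metrics
  FOR VALUES IN ('00000000-0000-0000-0000-000000000001');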
It is highly appealing to have that defense in depth. However, when building a prototype or a product, not having experience in it causes me to worry that we will end up stuck with a choice that's very hard to pull ourselves out of.
So instead we've stuck to having that filtering logic in the application side. The main concern is how user auth/etc works in Postgres. (lack of knowledge, not lack of trust).
Because we also have complex filtering like, "let me see all the people in my team if I have this role, but if I'm a public user, only show this person" etc
The documentation section here applies to so many products I've battled in the past.
> The command supabase db remote commit is documented as "Commit Remote Changes As A New Migration". The command supabase functions new is documented as "Create A New Function Locally." The documentation page is beautiful, but the words in it just aren't finished.
Great documentation is such a force multiplier for a product. It's so worthwhile investing in this.
Don't make your most dedicated users (the ones who get as far as consulting your documentation) guess how to use your thing!
yeah, most golang/rust/API documentation in products seems to think that "the function name is documentation", which.... no it's not. that's a tooltip in an IDE, not a docs website.
I hadn’t touched SQL for almost 7 years, but dipped my toes back in to build a PoC using Supabase. Despite some initial pains around RLS, I’ve grown to love it.
Sure, Supabase has some awkward quirks and issues, and the author has some good points. But when it works like it should, it’s pretty awesome. I think of it as a powerful wrapper around solid services that makes for great DX, in _most_ cases.
If Supabase could provide a great way to handle migrations and RLS, that’d be the biggest improvement to most people’s workflows, I’d bet.
I really wish I could just define my schema, tables, functions, triggers, policies etc as TypeScript, then have migrations generated from that.
Echo all the words from the author here, and kudos for being transparent.
I’ve faced exactly the same problems building my new product.
But, on the other hand, Supabase was incredibly easy to setup, and meant I could worry about infrastructure later.
Pros and cons like with everything, and always wise to understand the flaws of the tech you’re using.
The local development & database migration story is Supabase's biggest weakness. I hate having to do migrations live in prod. The admin dashboard is just so much better than any alternative Postgres tooling that it's been worth using despite that. Takes care of the stuff I'd normally be sweating over when writing migrations like nullable fields / FK constraints / JSON formatting for default fields. Would be great if Supabase allowed for a "speculative migration" in its UX where it spit out a file you could use locally to test beforehand.
Please don't use the Dashboard to edit your database in production. We're working on Preview Databases which will help enforce this. For now this fits into our Shared Responsibility Model:
You are responsible for a workflow that's suitable for your application. Once you get into production, you should be using Migrations for every database change. I have a more thorough response here: https://news.ycombinator.com/item?id=36006018
> Please don't use the Dashboard to edit your database in production.
You should make the default editor read-only and allow switching to write mode with a big warning. This would discourage people from writing SQL or using the UI to modify the database in production.
The dashboard has always screamed "use me to edit" and I have used supabase in the beginning and very recently too. Nothing has changed to discourage it so far.
Maybe something like a mode button at the top that you can click to switch between development and production mode?
This would also change a couple more things which you do not want to touch in production by accident.
If you use the CLI, `supabase start` spins up a Docker instance built from all your migration .sql files [1].
If anything, I think the admin dashboard encouraging directly doing operations on the database is the biggest weakness of Supabase. I would much prefer being able to lock it down to purely CI-driven migrations.
A middle way could be self-hosting Supabase, whether you use more or fewer Supabase features.
I know self-hosting might be challenging, especially getting a production-ready Postgres backend for it.
That's why at StackGres we have built a Runbook [1] and companion blog post [2] to help you run Supabase on Kubernetes. All required components are fully open source, so you are more than welcome to try it and give feedback if you are looking into this alternative.
Nice read. I run 5-6 projects on Supabase currently. I have also run into the local development / migration obstacles. It's otherwise been pretty great for our needs
The CLI could use some love for sure. I think the migrations experience is also where I’ve felt the most pain. I will say, the CLI very heavily assumes that you are using the cloud product as the remote, which I guess is absolutely intentional (it’s a path to get users onto the product) but it was kind of annoying to figure that out halfway into a POC like I did. Don’t go in expecting you can point the CLI at some self hosted remote. It’s not possible without forking, making significant changes and rebuilding the CLI, at least at the time I was doing this a few months ago.
Hello, I work on CLI full time. Things have certainly improved over the last few months on using this tool for migrating self-hosted databases.
Currently all supabase db and migration commands support --db-url flag [1] which allows you to point the CLI to any Postgres database by a connection string.
If there's any use case I missed, please feel free to open a GitHub issue and I will look into it promptly.
"We rewrote our data layer to treat the database as a simple persistence layer rather than an application. We eliminated all the triggers, stored procedures, and row-level security rules. That logic lives in the application now."
Reminds me of the article and discussion here[0] over whether to put logic in the database or not and to what degree.
"The situation becomes interesting when the vast majority of your data sits in a single logical database. In this case you have two primary issues to consider. One is the choice of programming language: SQL versus your application language. The other is where the code runs, SQL at the database, or in memory.
SQL makes some things easy, but other things more difficult. Some people find SQL easy to work with, others find it horribly cryptic. The teams personal comfort is a big issue here. I would suggest that if you go the route of putting a lot of logic in SQL, don't expect to be portable - use all of your vendors extensions and cheerfully bind yourself to their technology. If you want portability keep logic out of SQL."
Great read. Similar to my experience with Hasura. Migrations were better there but the row level security was a nightmare. Went to just a custom node backend with Prisma and it’s a dream. No more writing tons of json rules and multiple views just to not query the email field.
Seems like these types of services are good for basic large scale crud applications, probably why you have Hasura pivoting to enterprise.
The quote at the end about going back to the future is exactly how I felt. Will never use a Hasura/Supabase/etc again. Just makes things more difficult.
Had a similar experience with Hasura. They have done some amazing things leveraging Postgres and GraphQL. But there were just too many things that got really questionable. Things like migrations becoming inconsistent with metadata, schema lock in, poor ability to do rate limiting, having to use stored procedures for everything, weird SQL that had performance issues, unexplained row level locking, and so on. Local development was a total mess.
Ultimately we were making architectural decisions to please Hasura, not because it was in the best interests of what or how we were building.
Having worked on a BaaS type offering, this is all very familiar. Over the years, I've come to believe the approach of trying to define a service layer with these magic abstractions is fundamentally flawed and will always lead to the problems in this article: poor performance, poor local development experience, no transparency into what is going on under the hood. They are great for fast proofs of concept, but not sustainable, long-term product development.
> Render Preview Environments are amazing: they spin up an entire clone of our whole stack — frontend remix server, node api server, deno evaluation server, and now postgres database — for every pull request.
So they wanted more than a database then, no? Are they saying they really just needed a DB and the other stuff was a nice bonus? If they really wanted just a DB, are there not cheaper, and possibly simpler, options than Render?
[op]: Render is a web host, on which we host other applications. They offer a managed Postgres version, which is in my experience pretty similar to Heroku, RDS, or other managed databases.
Maybe the sentence makes that confusing - we're using other stuff on Render, which are basically "web servers" in the Heroku-ish sense, and we're also using their managed database, which is just a database. And it's nice that Render, like some other managed hosting providers, lets you boot up and connect those services.
I guess it's more than a database in some sense because it networks to our web servers and can be booted up in a preview environment, but it is mostly just a database. There are cheaper options that would be more work to wire up in such a convenient way, but the pricing difference between a database on AWS and one on Render is not the highest priority right now.
Thanks, that does clear it up. I was missing the context about Render's full offering. Makes sense re the current priority; I can see many situations or phases of a company where using Render could make a lot of sense even if there are cheaper way to get a PG database.
They just want to think about it as just a database when they are in building-the-application mindset. But they want it to have lots of conveniences and features and easy management when they are in building-the-company-and-team mindset.
PSA: Supabase Auth is based on their fork [0] of Netlify's Gotrue [1]. If you are migrating out of Supabase completely you can just drop in Gotrue for authentication.
We switched to Clerk.dev. Thankfully we had only supported magic link auth, so there wasn't much information to migrate over. Clerk has been pretty good - they have a great Remix integration and solid admin experience.
I have just begun playing with Supabase and have a habit of running `brew upgrade` several times per week. It bugs me that the Supabase CLI is updated every single time I run `brew upgrade`. I suspect that if I were to run `brew upgrade` twice a day, it would probably still update every single time. It makes me feel like I'm trying to swing a bat around, except it's made of water.
Every time I read one of these migration stories, I find myself waiting with bated breath for the part the team couldn't achieve. After finding it, the remainder of the story becomes difficult to read.
It isn't necessarily the team's fault, the developer experience clearly has room for improvement. Props to Val Town for being so honest, it is difficult to do.
I'm currently contracted on a greenfield Django REST framework app and if the decision had been up to me I probably would have gone with Supabase right off the bat. But honestly I'm absolutely loving Django REST framework over vanilla Postgres. It took me a while to get the hang of views and serializers and validation, etc, but now that I do it feels incredibly flexible and powerful. One thing I'm loving is how easy it has been for me to write management commands and build a comprehensive test suite, and that's one aspect of building a web app that I don't hear talked about much with Supabase.
Honestly, I want to like Supabase but a lot of this resonates with me even for a fairly small project. I also ended up with 3 user tables due to RLS limitations: auth users, public user profile info, and private user info (e.g. Stripe customer IDs). PostgREST's limitations also had me going back to an API server architecture because I definitely didn't want to write logic in database functions.
The only reason I haven't migrated yet is because I'd have to rewrite the data layer to use Prisma/Drizzle instead of Supabase's PostgREST client, and considering that this is a side project, the problems aren't quite big enough to justify that.
Definitely check out "choose your comfort level"[1].
> PostgREST's limitations also had me going back to an API server architecture because I definitely didn't want to write logic in database functions.
Because of PostgREST's philosophy[2], you're expected to write database functions (not necessarily SQL, since PostgreSQL offers many PLs).
So if you're not comfortable with that, you can treat PostgreSQL just as a data store and pair it up with your favorite ORM. Supabase doesn't force you to use PostgREST.
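For example, a small piece of logic can live in a database function (the table, column, and function names here are made up; bookmarks/owner_id echo the earlier RLS example), and PostgREST then exposes it at POST /rpc/add_bookmark:

CREATE FUNCTION add_bookmark(p_url text)
RETURNS bookmarks
LANGUAGE plpgsql
AS $$
DECLARE
  result bookmarks;
BEGIN
  -- the validation lives next to the data instead of in a separate API server
  IF p_url !~ '^https?://' THEN
    RAISE EXCEPTION 'invalid url: %', p_url;
  END IF;

  INSERT INTO bookmarks (owner_id, url)
  VALUES (auth.uid(), p_url)
  RETURNING * INTO result;

  RETURN result;
END;
$$;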
Real-world "things we ran into" stories like this are super helpful when choosing a service or technology.
Unfortunately, I have a similar experience with Firebase, where I wish I would have known that:
* Don't like the text of your Firebase Auth SMS verification message that we send on your behalf -> tough luck
* Your app name is longer than 15 characters? We are not going to include that hash in your Firebase Auth SMS message that is required by Android to perform an automatic login.
* Global Firebase Auth SMS pricing does not work for you economically? Welcome to implement the whole thing yourself anyways.
* Dealing with development environments is flakey, as Firebase's emulators work 98% similar to production, but you will regularly hit things that are different.
* You can't completely automate environment creation/tear down, as not everything is covered by Terraform or Google's own APIs, so you will end up doing manual things in their admin interface.
* Real-time subscriptions in Firestore end up not being worth the tight schema coupling between client and server, as you can't control when the updates fire and you end up with a lot more unintended side effects than what this technology benefits you.
So after a year of workarounds you finally end up deeply understanding the trade-offs involved in Firebase and make the decision that its downsides exceed its out of the box benefits. :(
Wondering if the supabase CEO or any customers here can discuss scale. What “size” applications are doing really well on supabase? Are there any customers with TBs (or more) of data? What sort of performance are they achieving? Any customers with previous experience at a larger scale that are now using supabase and similar or larger scale, how are things going? What’s the average development team size of customers?
I mean, I get that using such a service will somewhat speed up the initial “time to release”, but I still don't understand why I would use such a layer on top of a normal DB instead of just using a programming framework with an ORM or even a direct DB connection.
After some time you just run into limitations and have to maneuver around the weird stuff that the platform has somehow imposed on you, things that just won't happen if you just use the vanilla DB.
The passage in the article about the DB going offline during a backup every day at midnight is just insane to be honest.
Also these services typically cost much more than just self hosting a DB.
And why the hell would I ever put any amount of substantial business logic in the DB itself? Yes, there may be speed benefits, but in most cases the added burden of doing this is not necessary, compared with using actual code.
I've been evaluating both (kysely, drizzle), and I'm leaning towards Kysely. Drizzle is the newer one, but both are moving fast if you look at the git insights. I find Kysely more enjoyable and straightforward -- intellisense autocompletions work better (for me at least), and the expression builder combined w/ helper methods like jsonArrayFrom offers a lot of flexibility over how to shape the output, so you're in full control, which is one of the reasons I wanted to explore alternatives to prisma in the first place.

I had actually decided on Kysely, but am taking another look at Drizzle because of the recently added support for relations. That support is a nice addition, but there's boilerplate you need to write to take advantage of it and, frankly, I just find it easier to get the same results w/ Kysely, again with added flexibility (it is not trying to be an ORM). Some things I really like about Drizzle are not needing to generate a schema, and that it lets you map column names (e.g., the created_at db column maps to createdAt on the object). Drizzle can also infer types from the schema, but I haven't found this to be a big pro relative to Kysely because 1) it takes very little effort to build a zod schema that "satisfies" the Kysely type definitions, and 2) I'm overriding the Drizzle-inferred types anyway to get the final runtime checks implemented (e.g., is a cuid2 of 16 len, not just is a string).

I've also been using prisma-kysely, which gives me ubiquitously supported prisma tooling for handling migrations, etc.
edit: while not exhaustive, I'm seeing better perf (by about 20%) using kysely compared to drizzle for identical queries on planetscale. Take this with a grain of salt since I've made no attempt to measure exhaustively, or optimize -- just using them "as-is" so to speak, but I thought it would be worth noting, and nothing indicates to me that drizzle offers a big perf improvement over kysely, as has been suggested. Thanks for this post btw. Drizzle is hot and shiny right now (Theo just promoted it big time), but after taking a second look at drizzle and in the process of writing up my thoughts here, it's become clear to me that I'm sticking w/ kysely.
One minor correction (sort of): just noted in the docs that kysely provides a built-in camel case plugin for transforming camel to snake case, e.g., createdAt to created_at in the db. Not as flexible as an arbitrary transform but it serves my needs perfectly.
Sure! I think Kysely is great too, but went with Drizzle for a few different reasons:
Kysely is a little more established than Drizzle, which I think is one of the major reasons why it has broader adoption. My bet is that Drizzle is moving really fast, gaining adoption, and might catch up at some point. It's also - in terms of performance - super fast, and nicely layers on top of fast database clients.
Some of the differences that I liked about Drizzle were the extra database drivers being core and developed as part of the main project. It supports prepared statements, which is awesome. The Drizzle API also covers an impressive percentage of what you can do in raw SQL, and when there's something missing, like a special column type, it's been pretty straightforward to add.
I prefer the way that it lets us write parts of queries, and compose them - like you import expressions like "and" and "eq" and you can write and(eq(users.id, 'x'), eq(users.name, 'Tom')) and you can actually stringify that to the SQL it generates. Or you can do a custom bit of SQL and use the names of table columns in that, like `COUNT(${users.name})`. I can't say scientifically that this is superior, and it's almost a little weird, but I've really found it a nice way to compose and debug queries.
That said, Kysely is also a great project and it'd be possible to build great products with it, too. I just found the momentum, API, and philosophy of Drizzle to be pretty compelling.
> It's also - in terms of performance - super fast
Kysely is also super fast. Your bottleneck will always be database requests. If you're chasing every milli, why node.js?
> the extra database drivers being core and developed as part of the main project.
Kysely's dialects are dead simple to implement on your own. As evidenced by all the 3rd party dialects being open-sourced and all the comments from people using Kysely in production with stuff like cockroachdb, mariadb, clickhouse and such.
It's unhealthy to maintain niche database knowledge in the core. We just don't have the time (FYI we do this for fun, not trying to catch all the sponsors and get VC funded) to play around with all of these technologies, and stay up-to-date with changes.
Both Sami and I have submitted pull requests in 3rd party dialect repositories in the past. I maintain a few dialects on my own.
> It supports prepared statements, which is awesome.
In connection pooling scenarios Kysely was mainly built for, prepared statements are arguably "not that great". In FaaS, a burst of requests might make your database work extra hard, as each new lambda instance comes with brand new connection/s.
> I prefer the way that it lets us write parts of queries, and compose them - like you import expressions like "and" and "eq" and you can write and(eq(users.id, 'x'), eq(users.name, 'Tom')) and you can actually stringify that to the SQL it generates. Or you can do a custom bit of SQL and use the names of table columns in that, like `COUNT(${users.name})`. I can't say scientifically that this is superior, and it's almost a little weird, but I've really found it a nice way to compose and debug queries.
This has been part of Kysely for a while now, and is only getting stronger with new `ExpressionBuilder` capabilities. The fun part is, you don't have to import anything, and are not coupled to your migration code.
Personally I like both projects, as I hope I made clear in the OP - I sense that there's some history and strife here that I'm not clued into as an outsider.
My personal problem with Kysely is that the migrations are not aligned with what I needed personally.
I would have wanted to see Kysely have the ability to generate migrations for example. I also personally prefer the approach that Drizzle takes when it comes to more adoption (in my case, CockroachDB).
Just a personal preference - the project is awesome.
Drizzle Kit, the migration part of Drizzle, is not open source, though they've said they will open-source it in the future, just not at this point. Kysely is 100% open source, feature-rich, more stable, and back under active development.
atlasgo.io looks promising to handle migrations and is open source as well. I am currently using Prisma.
What was most shocking to me was that it took a week to migrate 40GB.
I once migrated 1TB from RDS Oracle to RDS Aurora MySQL in 6 hours. I'm not familiar with Supabase, maybe there's a lot more to the data migration process?
Supabase developer here. It shouldn't (and doesn't) take a week to migrate 40GB, I'm sure most of that time was strategizing, analyzing, and testing things. Supabase is pure Postgres running on AWS, so migrations are pretty straightforward. Things mostly depend on where you're migrating to/from, and the network latency between the source and destination. 40GB should take minutes in most cases.
There were a couple of factors why it took a week:
1. We wanted to avoid downtime, so the pg_dump was slowed down because it was happening alongside production use of the db
2. We abuse postgres in a couple of ways (too many large json columns) which makes it harder to export and import
3. We were moving between cloud regions and cloud providers.
4. I'm a bit of a database ops noob (part of why supabase was appealing in the first place) so I had to learn how to do all these things. Like burggraf said, a lot of that week was planning, trial and error, test runs, mistakes that would cost full days, etc.
Can someone explain a bit better what the issues are? What exactly are the issues with migration if you use an SQL script to do the migration instead of the Supabase interface?
For developers who have worked with databases before, SQL migrations might be obvious. But for many of our audience it's not. We'll adapt the interface to make this pattern more front-and-center. We also need to improve our CLI to catch up with other migrations tools because a lot of our audience haven't used established tools before (flyway, sqitch, alembic, etc)
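For anyone who hasn't used one before, a migration is just a timestamped SQL file committed to the repo and applied in order; a minimal sketch (the file name and schema below are illustrative, following the supabase/migrations convention):

-- supabase/migrations/20230601000000_create_profiles.sql
CREATE TABLE profiles (
  id uuid PRIMARY KEY REFERENCES auth.users (id),
  username text UNIQUE NOT NULL,
  created_at timestamptz NOT NULL DEFAULT now()
);

ALTER TABLE profiles ENABLE ROW LEVEL SECURITY;

CREATE POLICY profiles_owner ON profiles
  USING (id = auth.uid());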
I also had a tough time working w/ an app someone else built on Supabase. We kept bumping up against what felt like "I know feature X exists in postgres, but it's 'coming soon' in Supabase." IIRC the blocker was specific to the trigger/edge function behavior.
However after reflecting more, I don't remember enough to make a detailed case. Perhaps the issue was with our use of the product.
> "I know feature X exists in postgres, but it's 'coming soon' in Supabase."
There is no feature that exists in postgres that doesn't already exist in Supabase. In case it's not clear, supabase is just Postgres. We build extensions, we host it for you, and we build tooling around the database. Our Dashboard is one of those tools, but there is always an escape hatch - you can use it like any other postgres database, with all the existing tooling you're most comfortable with.
Thanks for the response. I do recall hitting some product limitations (a webhooks "beta" that we tried to use but hit a blocker). Reflecting more, I don't recall the supporting details specifically enough though. Edited original post and apologies for the added noise.
> The CLI manages the Supabase stack locally: Postgres, gotrue, a realtime server, the storage API, an API gateway, an image resizing proxy, a restful API for managing Postgres, the Studio web interface, an edge runtime, a logging system, and more – a total of 11 Docker containers connected together.
Can Supabase author a set of Kubernetes manifests similar to what they run in production, and perhaps distribute those?
This is not from Supabase, but as a community contribution. See upthread [1]: "at StackGres we have built a Runbook [2] and companion blog post [3] to help you run Supabase on Kubernetes."
Assuming you’re talking about Supabase, I kind of disagree.
There’s an initial learning curve with the row level security stuff, but once you get a good grasp of it and come up with a few patterns that suit your needs it’s insanely fast to develop on. You’re trading the time it takes to build and manage an API for the time it takes to set up RLS.
I'd say Supabase is great at spinning up CRUD apps. If anything, this article could be summarized as "Because Val Town is much more than a CRUD app, they had a harder time with Supabase than the average."
Mostly that solves setting up Auth and wiring the Prisma SQL ORM to your DB, but with the Next.js App directory and the Prisma setup done (2 files / 50 LOC), it's even smoother.
[0] Maturity models: https://supabase.com/docs/guides/platform/maturity-model
[1] Choose your comfort level: https://supabase.com/docs/guides/getting-started/architectur...
[2] Index advisor: https://database.dev/olirice/index_advisor
[3] Open Telemetry: https://github.com/Logflare/logflare/pull/1466
[4] Local logging: https://supabase.com/blog/supabase-logs-self-hosted
[5] Usage: https://twitter.com/kiwicopple/status/1658683758718124032?s=...