> if you're trying to keep things in sync efficiently
I think we are all abusing GraphQL in this sense. Caching GraphQL data is a world of hurt.
The easiest way to sync is when your client-side data uses the same model as the way it is stored, which is normalized if you're using SQL. Even when we do normalize, most people represent a 1-M as an array of child IDs on the parent: `posts.comments = [1, 2, 3]`, when in the DB the foreign key lives on the `comment` entity: `comment.post_id = 1`. This causes real headaches for optimistic UI updates. It's a mess.
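To make the mismatch concrete, here's a minimal sketch (hypothetical types and function names, not any real library) contrasting an optimistic comment insert under the two shapes. With the array-of-IDs model you have to locate and rewrite the parent; with the normalized model you just append one row:

```typescript
// Denormalized cache shape: the parent owns an array of child IDs.
type DenormalizedPost = { id: number; comments: number[] };

// Normalized shape mirroring the SQL schema: the child holds the foreign key.
type Comment = { id: number; post_id: number };

// Optimistic insert, denormalized: find the parent and mutate its array.
function addCommentDenormalized(
  posts: DenormalizedPost[], postId: number, commentId: number
): void {
  const post = posts.find(p => p.id === postId);
  if (post) post.comments = [...post.comments, commentId]; // parent rewritten
}

// Optimistic insert, normalized: append one row; no parent record touched.
function addCommentNormalized(
  comments: Comment[], postId: number, commentId: number
): Comment[] {
  return [...comments, { id: commentId, post_id: postId }];
}
```

Rolling back a failed mutation is the same story: the normalized version deletes one row, while the denormalized version has to surgically un-edit the parent.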
We should be using GraphQL to retrieve data and then render it directly.
Ultimately, I think we should be implementing our GraphQL API client-side against a local SQL DB that acts as a pass-through cache. Then your entire app runs offline and responds to queries instantly, eliminating the need for a separate client-side cache, and you no longer have to worry about data model mismatch.
> Depends on how much
I think all apps want this - unless they are SSRing with <100ms on every action.
I'd love to get rid of all these client-side state management libraries and just drop an SQL DB in there and allow components to subscribe to changes to queries on record updates, and then sync it to a cloud DB.
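A bare-bones sketch of that subscription model (all names here are hypothetical, and real queries would be SQL rather than filter functions): writes to a table re-run every query subscribed against it and notify the owning components.

```typescript
// Minimal reactive-DB sketch: components subscribe to a query over a table;
// any write to that table re-runs the query and pushes fresh results.
type Row = Record<string, unknown>;
type Query = (rows: Row[]) => Row[];
type Listener = (rows: Row[]) => void;

class ReactiveDB {
  private tables = new Map<string, Row[]>();
  private subs: { table: string; query: Query; fn: Listener }[] = [];

  // Register a listener and immediately deliver the current result set.
  subscribe(table: string, query: Query, fn: Listener): void {
    this.subs.push({ table, query, fn });
    fn(query(this.tables.get(table) ?? []));
  }

  // Write a row, then re-run every subscribed query that reads this table.
  insert(table: string, row: Row): void {
    const rows = this.tables.get(table) ?? [];
    rows.push(row);
    this.tables.set(table, rows);
    for (const s of this.subs) {
      if (s.table === table) s.fn(s.query(rows));
    }
  }
}
```

A real implementation would diff result sets to avoid redundant renders and batch the cloud sync, but the component-facing API could stay this small.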
One interesting problem I came across is how to know when your local DB has enough records to answer a given query. E.g. if your team has 100K tasks in their cloud DB, and you run `SELECT * FROM tasks WHERE tasks.assignee = user.id`, and then another query of `SELECT * FROM tasks WHERE tasks.assignee = user.id AND tasks.completed = true`, the second result set is a subset of the first, so it doesn't need to hit the server except to check whether any updates were made to the tasks table. Then apply this to all queries, including ones with joins. Some fun relational algebra can actually be used to figure it out - one of the perks of working with SQL. The problem is similar to how to [incrementally maintain a materialized view](https://wiki.postgresql.org/wiki/Incremental_View_Maintenanc...), which Postgres does not support (Oracle does).
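For the restricted case of conjunctive equality predicates, the containment check above can be sketched in a few lines (hypothetical shapes; real query subsumption over joins, ranges, and projections is much harder): query B is answerable from the cached results of query A when B's WHERE clause implies A's, i.e. B's set of conditions is a superset of A's.

```typescript
// One `column = value` term in a conjunctive WHERE clause.
type Cond = { column: string; value: string | number | boolean };

// The cached query's rows cover the incoming query when every condition of
// the cached query also appears in the incoming one (incoming is narrower).
function subsumes(cached: Cond[], incoming: Cond[]): boolean {
  return cached.every(a =>
    incoming.some(b => b.column === a.column && b.value === a.value)
  );
}
```

So `subsumes([assignee = 7], [assignee = 7, completed = true])` holds, and the second query from the example can be evaluated locally over the first query's cached rows.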