Yeah, that was one of the inspirations for this. That one doesn't work in the browser though; it would be a good project to do the same thing with SQLite in WASM, integrated with WebTorrent instead of a native torrent client.
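A minimal sketch of what the torrent side could look like, assuming WebTorrent's documented createReadStream({ start, end }) API (inclusive byte range). The function names and the "first file is the database" assumption are illustrative; a real version would plug readRange into sql.js's lazy page-fetching layer:

```ts
import WebTorrent from "webtorrent";

// Open the torrent once; afterwards, serve page-sized reads out of it.
function openDb(magnetURI: string): Promise<any> {
  const client = new WebTorrent();
  return new Promise((resolve) =>
    client.add(magnetURI, (torrent) => resolve(torrent.files[0])) // assume the .sqlite3 file is first
  );
}

// Read an inclusive byte range from the torrent file, fetching only the
// pieces that overlap it -- the torrent analogue of an HTTP Range request.
function readRange(file: any, start: number, end: number): Promise<Uint8Array> {
  return new Promise((resolve, reject) => {
    const chunks: Uint8Array[] = [];
    file
      .createReadStream({ start, end })
      .on("data", (c: Uint8Array) => chunks.push(c))
      .on("end", () => {
        // Concatenate the streamed chunks into the requested range.
        const out = new Uint8Array(end - start + 1);
        let off = 0;
        for (const c of chunks) {
          out.set(c, off);
          off += c.length;
        }
        resolve(out);
      })
      .on("error", reject);
  });
}
```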
I actually did also implement something similar that fetches data on demand from WebTorrent (and in turn helps host the data yourself just by being on the website): https://phiresky.github.io/tv-show-ratings/ That one uses protobufs split into a hashmap instead of SQLite, though.
There are a bunch of *-to-sqlite utilities in the corresponding Dogsheep project.
Arrow JS for 'paged' browser client access to DuckDB might be possible and faster, but without full SQLite SQL compatibility or the SQLite test suite. https://arrow.apache.org/docs/js/
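For reference, reading a static Arrow IPC file in the browser looks roughly like this. tableFromIPC is the current apache-arrow API; the file name and column are made up, and note you get columnar access rather than a SQL layer:

```ts
import { tableFromIPC } from "apache-arrow";

// Fetch a hypothetical pre-built Arrow IPC file from static hosting
// and materialize it as a Table.
const table = tableFromIPC(
  await fetch("/data/ratings.arrow").then((r) => r.arrayBuffer())
);

// Columnar access by name -- no SQL, unlike the SQLite approach.
const titles = table.getChild("title");
console.log(table.numRows, titles?.get(0));
```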
In-browser notebooks like Pyodide and Jyve have local filesystem access with the new File System Access API, so downloading/copying all of the data into the browser for every run of a browser-hosted notebook may not be necessary.
https://web.dev/file-system-access/
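A minimal sketch of that API (Chromium-based browsers only for now): the user picks a local SQLite file once, and the page can then do cheap range reads against it instead of copying the whole file on each run. The file-type filter is illustrative; showOpenFilePicker, getFile, and Blob.slice are the real entry points:

```ts
// window.showOpenFilePicker isn't in the standard DOM typings yet,
// hence the cast.
async function openLocalDb(): Promise<File> {
  const [handle] = await (window as any).showOpenFilePicker({
    types: [
      {
        description: "SQLite",
        accept: { "application/x-sqlite3": [".sqlite3", ".db"] },
      },
    ],
  });
  return handle.getFile();
}

// Blob.slice() reads only the requested bytes -- no full copy needed.
async function readPage(
  file: File,
  offset: number,
  length: number
): Promise<ArrayBuffer> {
  return file.slice(offset, offset + length).arrayBuffer();
}
```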
I'd also version the DB in the URL; otherwise you could end up changing the file out from under someone who already has the page loaded, with who-knows-what results depending on how different the file is. You could prefix a HEAD request to every range request to check for changes, but that adds overhead and doesn't completely close the gap, so it would still be possible to read a file different from the one you intended. The cost is more disk usage, depending on how many old copies you keep around, but keeping at least the most recent old version seems reasonable unless you're skating really close to the quota on whatever system you're using.
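Both options in miniature (names and the version string are illustrative):

```ts
// (a) Version the DB in the URL: a loaded page keeps reading the exact
// file it started with, and a new deploy gets a fresh URL.
const DB_VERSION = "2021-05-02"; // e.g. a content hash or release tag
const dbUrl = `/data/db-${DB_VERSION}.sqlite3`;

// (b) Detect mid-session swaps by comparing the ETag before range reads.
// This adds a round trip per read and still leaves a race window between
// the HEAD and the GET, as noted above.
async function etagChanged(
  url: string,
  lastEtag: string | null
): Promise<boolean> {
  const res = await fetch(url, { method: "HEAD" });
  const etag = res.headers.get("ETag");
  return lastEtag !== null && etag !== lastEtag;
}
```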
Sort of. Access had a "Forms" feature that let you create basic GUIs on top of your database. Also, the OP's project (currently) only provides a read-only view of the SQLite database. Adding write support is possible, but it will be far less impressive to the HN crowd because SQLITE_BUSY will rear its ugly head ;-)
I'm curious: in what manner could this method speed up Next.js builds? That's all done locally, which negates the benefit of HTTP range requests, right?
I'm guessing they mean that rather than building a static Next site that generates 10k+ pages (or whatever "large" means in the given context), you create one page that just queries the data from the client, as in the sketch below.
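A hedged sketch of that idea: one static page in place of thousands of generated ones, fetching rows client-side with sql.js-httpvfs (createDbWorker and worker.db.query per its README). The file paths, table/column names, and parameter binding here are assumptions:

```tsx
import { useEffect, useState } from "react";
import { useRouter } from "next/router";
import { createDbWorker } from "sql.js-httpvfs";

const config = {
  from: "inline" as const,
  config: {
    serverMode: "full" as const, // one plain .sqlite3 file on static hosting
    url: "/data/site.sqlite3",   // assumed location of the database
    requestChunkSize: 4096,      // should match the database's page size
  },
};

// pages/[slug].tsx -- a single dynamic page instead of 10k+ static ones.
export default function Page() {
  const { query } = useRouter();
  const [rows, setRows] = useState<any[]>([]);

  useEffect(() => {
    if (typeof query.slug !== "string") return;
    (async () => {
      const worker = await createDbWorker(
        [config],
        "/sqlite.worker.js", // worker + wasm copied out of the package
        "/sql-wasm.wasm"
      );
      // Only the DB pages needed to answer this query are fetched,
      // via HTTP Range requests.
      setRows(
        await worker.db.query(
          "select title, body from pages where slug = ?",
          [query.slug]
        )
      );
    })();
  }, [query.slug]);

  return (
    <main>
      {rows.map((r) => (
        <article key={r.title}>
          <h1>{r.title}</h1>
          <p>{r.body}</p>
        </article>
      ))}
    </main>
  );
}
```

The trade-off: builds no longer scale with page count, but first render moves from build time to the client.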
I have one Next static site with about 20k pages that takes about 20 minutes to build and deploy. I think that's an acceptable build time. But I do know of other people around the net who have mentioned sites with 20k-ish pages taking an hour+ to build. For them, I could see the desire to try this SQLite trick.
You should write a post about this if you implement it. My humble suggestion for a blog post title: 'Speed up your Next.js builds with this author's one weird trick! Vercel hates him!'
It's not the same, because you have to rebuild all the pages whenever you change your data source. With this implementation, you can upload a new dataset and it just works.
Generating 20k pages in 20 minutes is impressive: nearly 17 pages a second on average.
In my experience, it can take a couple of minutes just to deploy 20 pages, but that could just be the overhead of TypeScript and Sass compilation too...
I wonder when people using Next.js will start using this for faster builds of larger static sites.