Hacker News
Lowstorage: JSON-based database for Cloudflare Workers and R2 buckets (github.com/good-lly)
89 points by neon_me on Dec 3, 2023 | hide | past | favorite | 17 comments


I highly recommend anyone using this to use a Durable Object https://developers.cloudflare.com/durable-objects/ for writes, as this project doesn't seem to have consistency guarantees. If two Workers happen to write to the same collection at the same time, the data will be whatever the last Worker to write set it to.
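As a rough illustration of that suggestion, here's a minimal sketch of a Durable Object that serializes writes; `CollectionWriter` and the `'collection'` storage key are made-up names, and in a real Worker the class would be exported and bound in wrangler.toml:

```javascript
// Hypothetical sketch: a Durable Object that serializes writes to one
// collection. A single DO instance handles requests one at a time, so two
// Workers writing "simultaneously" get ordered instead of clobbering each
// other.
class CollectionWriter {
  constructor(state, env) {
    this.state = state; // state.storage is the DO's transactional storage
  }

  async fetch(request) {
    const { key, value } = await request.json();
    // This read-modify-write does not interleave with other requests
    const doc = (await this.state.storage.get('collection')) || {};
    doc[key] = value;
    await this.state.storage.put('collection', doc);
    return new Response(JSON.stringify(doc), {
      headers: { 'content-type': 'application/json' },
    });
  }
}
```

Workers would then send writes to this object (looked up by a stable ID) instead of writing to R2 directly, and the DO could flush the merged document to the bucket.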

Very cool library for read-centric use cases but I'd be very careful using this for use cases with frequent writes.


I tried using Durable Objects and the API is sooooo confusing I ended up just moving to Firebase's Firestore.

I know they provide different features, but I just needed a NoSQL type of storage, so I didn't care.


Couldn't the library implement a conditional operation using the etag, retrying if it fails? That should provide the consistency guarantee.

Of course it's a different matter if you write to the same key in the same file. But that could be solved with a special method that lets you do a setOrUpdate on a key in the JSON: when the conditional write fails, the update function is executed again on the fresh data (allowing you to do increments or add a new item to an array) while guaranteeing consistency.
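A sketch of that retry loop, assuming a hypothetical store interface where `get()` returns the parsed document plus its etag and `put()` throws a PreconditionFailed-style error when the etag no longer matches (the real R2 binding expresses conditions via `onlyIf` on `put`, so the error shape here is illustrative):

```javascript
// Hypothetical setOrUpdate: read the document and its etag, apply updateFn,
// write back conditionally; on an etag mismatch, re-read and retry, so the
// update is re-applied to fresh data instead of overwriting a racing write.
async function setOrUpdate(store, key, updateFn, maxRetries = 5) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const { value, etag } = await store.get(key);
    const next = updateFn(value);
    try {
      await store.put(key, next, { ifMatch: etag });
      return next; // conditional write succeeded
    } catch (err) {
      if (err.name !== 'PreconditionFailed') throw err;
      // etag changed underneath us: loop and re-apply updateFn
    }
  }
  throw new Error(`setOrUpdate: gave up after ${maxRetries} conflicts`);
}
```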


Consider the case of two workers both trying to write the same key at once. Which one should succeed?


If you use etags, it's up to R2. I'm assuming R2 is consistent here.

So if you write to the same file twice, one write will succeed before the other and change the file's etag, which should in theory cause the other request to R2 to fail, because the etag you send as a condition no longer matches.
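As a hedged sketch of that primitive: the R2 binding accepts an `onlyIf` condition on `put()`, and the assumption made here (worth checking against the R2 docs) is that a failed precondition makes the put resolve to null rather than throw:

```javascript
// Write only if the object's etag is still what we last read. `bucket`
// stands in for an R2 binding; treating a failed onlyIf condition as a
// null result is an assumption for illustration.
async function writeIfUnchanged(bucket, key, body, expectedEtag) {
  const result = await bucket.put(key, body, {
    onlyIf: { etagMatches: expectedEtag },
  });
  return result !== null; // false: another writer changed the file first
}
```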


It's up to your implementation, which is very similar to any other "average" database...


Slightly OT, but I found the experience of maintaining and deploying Workers to be terrible and monorepo-unfriendly, due to the forced use of wrangler for deploys (or at least I haven't found a better way), which has implications for how the project gets bundled and deployed.

I wish there was a simple, stupid "just drop an artifact with a main.js file" option and that's it.

Cloudflare is one of those companies I really want to like, but as a developer I just don't think they care. They want to chase big guys with big contracts, who get the love and support the average Joe does not.

They seem to strongly underestimate the traction and money they could get by giving engineers, rather than sales and accounts, some more love.

And thus we have moved our stack, initially built around Cloudflare, to Azure. It also helps that Microsoft throws lots of credits at startups and Cloudflare does not. But DX, rather than money, has been the biggest reason we moved off CF's offerings.


I have exactly the opposite experience. Cloudflare's DX is by far the best among the cloud providers I've tried. Today it's not that complicated to set up wrangler with credentials for your Cloudflare account in any major CI. If you have multiple services in your repo, you can have multiple directories, each containing its own wrangler.toml. You can even connect these services with service bindings (still a beta feature).
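For illustration, a per-service layout might look like this (the names and compatibility date are made up; `[[services]]` is how wrangler declares service bindings):

```toml
# services/api/wrangler.toml (hypothetical monorepo layout)
name = "api"
main = "src/index.ts"
compatibility_date = "2023-12-01"

# Service binding to the sibling Worker deployed from services/auth
[[services]]
binding = "AUTH"
service = "auth"
```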

In my mind, Cloudflare gets the big picture of DX exactly right. The deployment itself is not stateful; there's only one step to it: 'wrangler deploy'. Compare that to AWS Lambda: 1. build your code, 2. pack it into a zip file, 3. make sure the S3 bucket exists, 4. upload the zip to the bucket, 5. deploy the Lambda, 6. update the API Gateway... And don't even get me started on CloudFront: every change takes about 5 minutes to apply.

Every so often I do encounter rough edges, like service bindings only binding to deployed services, or the workerd binary dynamically linking against shared libraries so that wrangler installed through npm doesn't work on NixOS. That said, I respect the approach the Cloudflare team is taking by focusing on the really important stuff first.


> Compare that to AWS Lambda: 1. build your code, 2. pack it into a zip file, 3. make sure the S3 bucket exists, 4. upload the zip to the bucket, 5. deploy the Lambda, 6. update the API Gateway... And don't even get me started on CloudFront: every change takes about 5 minutes to apply.

For AWS, are you aware of SAM on CloudFormation[1]? The CDK[2]? You picked the best representation of Cloudflare's developer experience (via `wrangler.toml`) but the worst one for AWS. `cdk deploy` is pretty seamless and similar to `wrangler deploy` [3].

1 - https://docs.aws.amazon.com/serverless-application-model/lat...

2 - https://docs.aws.amazon.com/cdk/v2/guide/home.html

3 - https://docs.aws.amazon.com/cdk/v2/guide/serverless_example....


The CDK is incredibly byzantine and assumes you have a lot of in-depth knowledge of AWS services.

By contrast, I can learn Cloudflare Workers in a few hours.

I hate the CDK because it's a terrible, undisciplined, undiscoverable interface to their services.

I wish AWS weren't everyone's default cloud choice; its DX is horrendous.


Perhaps you should redo the calculation.

As soon as the Azure credits stop, it gets really expensive really fast, and migrating off is labor-intensive (been there, done that, and I regret going through it).

Cloudflare, in comparison, is relatively cheap.

PS: Uploading a zip to Cloudflare: https://developers.cloudflare.com/pages/get-started/direct-u...


I tried the zip upload once and it was really messy. The MIME types were not set properly (unlike with wrangler), and they polluted the CF cache, so even after I redeployed with wrangler it was still serving the assets with the wrong MIME types.

I think it was WebP files, but I can't remember.


Interesting - I found CF very promising, with nice, growing tooling (of course, still a lot of room to grow). But I have no experience with Azure. I was always struck by businesses like Oracle, which expand into the SaaS/PaaS/cloud business but provide zero tools for devs/"makers".


Interesting. I can see how this differs from D1 and the KV store, but what's the use case for doing it this way? Was there something KV and D1 weren't enough for?


Being a cheapskate, basically - KV offers only a 1 GB free tier, while R2 is very flexible but limited in ops.


Is there a design reason why the example does “new lowstorage” on each Hono endpoint?


You want to get the context (c) and its env inside the function to get to the R2 instance - in vanilla Worker code it would be wrapped inside async fetch(request, env, ctx).
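A sketch of that vanilla-Worker shape, with a hypothetical `Store` class standing in for lowstorage and `MY_BUCKET` for an R2 binding configured in wrangler.toml; the point is that env (which carries the bucket) only exists inside fetch, so anything wrapping it gets constructed per request:

```javascript
// Hypothetical sketch; Store stands in for lowstorage, MY_BUCKET for an
// R2 binding. Neither name comes from the library itself.
class Store {
  constructor(bucket) {
    this.bucket = bucket;
  }
  async read(key) {
    const obj = await this.bucket.get(key);
    return obj ? JSON.parse(await obj.text()) : null;
  }
}

const worker = {
  async fetch(request, env, ctx) {
    // env is only provided here, per request, so the store is built here too
    const store = new Store(env.MY_BUCKET);
    const users = await store.read('users.json');
    return new Response(JSON.stringify(users), {
      headers: { 'content-type': 'application/json' },
    });
  },
};
```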




