I prefer audit tables. Soft deletes don't capture updates; audit tables do. (You could model every update as a delete plus an insert in a soft-delete table, but that adds a lot of bloat to the table.)
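To make the distinction concrete, here is a minimal sketch in Python (standard-library sqlite3, with a hypothetical users/users_audit schema, done at the application level rather than with database triggers): each update writes a before/after snapshot to the audit table, so history is preserved without a deleted_at flag or duplicated rows bloating the live table.

    import json
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE users (
            id    INTEGER PRIMARY KEY,
            email TEXT NOT NULL
        );
        CREATE TABLE users_audit (
            audit_id   INTEGER PRIMARY KEY,
            user_id    INTEGER NOT NULL,
            action     TEXT NOT NULL,   -- 'insert', 'update', or 'delete'
            old_row    TEXT,            -- JSON snapshot before the change
            new_row    TEXT,            -- JSON snapshot after the change
            changed_at TEXT DEFAULT CURRENT_TIMESTAMP
        );
    """)

    def update_email(user_id, new_email):
        # Read the current row so the audit entry records the old state too.
        old = conn.execute(
            "SELECT id, email FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        conn.execute("UPDATE users SET email = ? WHERE id = ?", (new_email, user_id))
        conn.execute(
            "INSERT INTO users_audit (user_id, action, old_row, new_row) "
            "VALUES (?, 'update', ?, ?)",
            (
                user_id,
                json.dumps({"id": old[0], "email": old[1]}),
                json.dumps({"id": user_id, "email": new_email}),
            ),
        )
        conn.commit()

    conn.execute("INSERT INTO users (id, email) VALUES (1, 'old@example.com')")
    update_email(1, "new@example.com")
    print(conn.execute("SELECT action, old_row, new_row FROM users_audit").fetchall())

In a real system you'd more likely do this with triggers or change-data-capture rather than application code; the point is just that the live table stays lean while the audit table carries the full change history.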
Deleting data is also a very easy way to avoid GDPR compliance issues. Data is a cost and a risk, and should be minimised to what is actually relevant; storage is the smallest part of that cost.
The data in the graph from this next post shows an inflation-adjusted per-brick price of $0.40 in 1980 vs a little above $0.10 now. Perhaps more interesting is the cost-per-gram analysis, which also shows a large drop.
I think people tend to romanticize the past and underestimate the effect of inflation across decades. One thing that may contribute to the idea that Lego is now too expensive is that the average set seems to be larger and more complex now. Even if the bricks are cheaper, the sheer quantity of them will inherently raise the set price. That may explain why the data in the Reddit post shows average median set cost having risen even while per-brick cost has decreased.
Tip: Look for someone selling their grown-up children's Lego collection. I recently found a couple selling their children's old Lego collection on Facebook Marketplace. I got an enormous bag of it for just a few bucks. It was a headache to filter out the garbage (small non-LEGO toys, unique pieces that were not really useful, a few Mega Bloks mixed in, broken pieces, etc.), but it was worth it; my children love them!
I had it one-shot the full architecture for a fairly advanced distributed system for a client. It then one-shotted the actual code design (following absolutely all of our internal requirements on auth, stack to use, security, code styling, documentation, etc.). It then one-shotted each of the five microservices needed (and we code reviewed everything thoroughly).
It one-shotted the infrastructure choices and created the Terraform file to stand it up anywhere. It deployed it.
It caught some of the errors it had made by itself after load-testing, and corrected them. It created the load test itself (following patterns from previous projects we had).
It did all of this in a week. With human supervision at each step, but in a fucking week. We gave it all the context it needed and it one-shotted everything.
It is more than god-level. If you are not getting these increases in productivity, you are using it wrong.
Hey, would you be willing to share your claude.md? I'm only starting out with AI coders, and while it often makes good choices for straightforward things, I find the token usage gets bigger and bigger as it proceeds down a list of requirements. My working hypothesis is that it has to re-read everything as the project gets more complicated and doesn't have a concept of "this is where I go to kick it for this kind of thing".
Lol ok dude, good luck with your 'I just resell the output of Claude and I can't tell when it makes mistakes' business model. I'm sure it is a valid long-term economic niche.
Now, I resell the output of AI supervised by engineers.
We can tell when it makes mistakes. It used to make a ton. Now, with the right context, it really makes very few mistakes (which it can find and fix itself).
Perhaps this is about commit granularity. If keeping the history of how the task advanced is not useful, then I'd merge these commits together before merging the PR; in some workflows this is set up to happen automatically too.
You have got to have some extremely large files or something. Even with only Opus, running into the limits with the Max subscription is almost impossible unless you really try.
Storage is cheap. Never delete data.