> My wife double checked because she still "doesn't trust AI", but all her verification almost 100% matched Claude's conclusions
She's right not to trust it for something like this. The "almost 100%" is the problem, especially when a mismatch might mean discarding someone's resume, a decision that could have a significant impact on a person's life. (Also consider that you're sending personal data to Anthropic without permission.)
This is a pretty wild claim, so I think it is fair to be critical of the examples given:
- Driftless sounds like it might be better as a Claude Code skill or hook
- Deploycast is an LLM summarization service
- Triage also seems like it might be more effective inside CC as a skill or hook
In other words, all these projects are tooling around LLM API calls.
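To make that concrete: the core of a "summarization service" like Deploycast is often just a thin wrapper around a single API call. Here's a minimal sketch in Python, assuming the official anthropic SDK; the function name, prompt, and model id are placeholders of mine, not anything from the actual project:

    import anthropic  # pip install anthropic

    # The SDK reads ANTHROPIC_API_KEY from the environment.
    client = anthropic.Anthropic()

    def summarize(log_text: str) -> str:
        # Hypothetical core of a summarization service: one prompt, one call.
        message = client.messages.create(
            model="claude-sonnet-4-20250514",  # assumption: substitute any current model id
            max_tokens=512,
            messages=[{"role": "user",
                       "content": "Summarize this deploy log:\n\n" + log_text}],
        )
        return message.content[0].text

Everything else in such a product (queues, UI, auth) is conventional plumbing around that one call, which is the point of the critique above.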
> What was valuable was the commitment. The grit. The planning, the technical prowess, the unwavering ability to think night and day about a product, a problem space, incessantly obsessing, unsatisfied until you had some semblance of a working solution. It took hustle, brain power, studying, iteration, failures.
That isn't going to go away. Here's another idea: a discussion tool for audio workflows. Pre-LLMs, the difficult part of building something like this was never code generation.
You really know what a good interface should be like; this is inspiring. So is the design of everything I've seen on your website!
I won't pile on to what everyone else has said about the book connections / AI part of this (though I agree that part is not really the interesting or useful thing about your project), but I think a walk-through of how you approach UI design would be very interesting!
Same thing happens to me in long enough sessions in xterm. Anecdotally, it's pretty much guaranteed if I continue a session close to the point of context compaction, or if the context suddenly expands with some tool call.
Edit: for a while I thought this was by design since it was a very visceral / graphical way to feel that you're hitting the edge of context and should probably end the session.
If I get to the flicker point I generally start a new session. From what I've observed, though, the flicker point always happens eventually.
> if you do like to discover new music, self-hosting just isn't an option
Sure it is. Music discovery via algorithmic services is not the only way. There's radio, talking to people who have similar interests, reading interviews with musicians who talk about other music they like, browsing selections at the library, reading books about music or musicians, even just reading the liner notes for an album, noticing some players you like, and finding other things they've worked on, and on and on and on. It doesn't have to be high effort; it's not instant, but it works great.
We still sandbox, quarantine, and restrict them though, because they can't really behave as agents, but they're effective in limited contexts. Like the way Waymo cars kind of drive on a track, I guess? Still very useful, but not the agents that were being sold, really.
Have you been in a Waymo recently or used Tesla FSD 14.2? I live in Austin and my Model 3 is basically autonomous: it regularly goes for hours from parking space to destination parking space without my touching the steering wheel, navigating really complex situations, including construction workers using hand motions to signal the car.