Isn't the saved state still being sent as part of the prompt context with every prompt? The high token count is financially beneficial to the LLM vendor no matter where it's stored.
The saved state is sent on each prompt, yes. Those who are fully aware of this would seek a local memory agent and a local LLM, or at the very least a provider that promises no logging.
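To make the mechanics concrete, here is a minimal sketch of why saved memory inflates every request. The names and message shape are hypothetical, not any vendor's actual API:

```python
# Hypothetical illustration: saved "memory" is just extra context
# that gets re-sent with every single request.

MEMORY_STORE = [
    "User prefers concise answers.",
    "User is building a Rust CLI tool.",
]

def build_messages(user_prompt: str) -> list[dict]:
    """Assemble the message list for a chat-completion call.

    Every saved entry is injected into the system prompt, so each
    request pays the token cost of the entire store, every time.
    """
    system = "You are a helpful assistant.\n\nKnown facts about the user:\n"
    system += "\n".join(f"- {fact}" for fact in MEMORY_STORE)
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_prompt},
    ]

print(build_messages("Help me parse TOML."))
```

The more you let it remember, the more tokens you pay for (and the vendor bills for) on every turn.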
Every sacrifice we make for convenience will be financially beneficial to the vendor, so we need to factor the vendors out of the equation. Engineered context does mean a lot more tokens, so it will be more business for the vendor, but the vendors know there is much more money in saving your thoughts.
Privacy-first intelligence requires, at the bare minimum, one of these:
1) Your thoughts stay on your device.
2) At worst, your thoughts pass through a no-logging environment on the server. Memory cannot live here, because any context saved to a DB is basically just logging.
3) Or, slightly worse, your local memory agent keeps memory on-device and only sends some prompts to a no-logging server (a rough sketch of this follows below).
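A rough sketch of option 3, assuming a trivially simple local agent. All names, the file location, and the keyword filter are made up for illustration, and the remote call is a stub where a real agent would make an HTTPS request:

```python
import json
from pathlib import Path

# Hypothetical location; memory is written only to the local disk.
MEMORY_PATH = Path.home() / ".local_memory.json"

def load_memory() -> list[str]:
    if MEMORY_PATH.exists():
        return json.loads(MEMORY_PATH.read_text())
    return []

def select_relevant(prompt: str, memory: list[str]) -> list[str]:
    # Crude keyword filter: only entries sharing a word with the
    # prompt are ever allowed off the device.
    words = set(prompt.lower().split())
    return [m for m in memory if words & set(m.lower().split())]

def ask_remote(prompt: str, context: list[str]) -> str:
    # Stub for a call to a provider that promises no logging. Only
    # the prompt and the few entries judged relevant are sent; the
    # full store never leaves the machine.
    return f"would send {prompt!r} with {len(context)} context entries"

prompt = "How do I parse TOML in Rust?"
print(ask_remote(prompt, select_relevant(prompt, load_memory())))
```

The point of the design is that the remote side only ever sees individual prompts, never the accumulated picture of you.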
The first two options will never be offered by the current megacapitalists.
Finally, the developer community should not be adopting things like Claude memory because we know. We’re not ignorant of the implications compared to non-technical people. We know what this data looks like, where it’s saved, how it’s passed around, and what it could be used for. We absolutely know better.
Governments change. Any group in society has the potential to become marginalized, and if all services are funneled through a single system it becomes very easy to selectively switch off access.
In my experience this approach is kicking the can down the road. Tech debt isn't paid down, it's being added to, and at some point in the future it will need to be collected.
When the agent can't kick the can any more, who is going to be held responsible? If it's going to be me, then I'd prefer to have spent the hours understanding the code.
This is actually a pretty huge question about AI in general.
When AI is running autonomously, where is the accountability when it goes off the rails?
I'm against AI for a number of reasons, but this is one of the biggest: a computer cannot be held accountable, therefore a computer must never make executive decisions.
The accountability would lie with whoever promoted it. This isn't so much about accountability as it is about who is going to be responsible for doing the actual work when AI is just making a bigger mess.
The accountability will be with the engineer who owns that code. The senior or manager that was responsible for allowing it to be created by AI will have made sure they are well removed.
While an engineer is "it", they just have to cross their fingers and hope no job-ending skeletons are resurrected until they can tag some other poor sod.
School left me with the impression that hieroglyphs were primitive constructs - purely logographic and ideographic. It was a shock to later learn that they are also alphabetic and phonetic.
The opportunities for creative expression are amazing in such a system.
Yes, the system is reminiscent of written Japanese in that a word is sometimes spelled out phonetically, sometimes written with an ideograph, and sometimes both for good measure when one or the other isn't viewed as clear enough.
Come on... anything of interest can be reduced to a few absurd actions absent any context.
For example, I don't understand how:
- moving pieces of wood around a board can have mass appeal
- smearing colored paint on material can have mass appeal
- smashing sticks against covered buckets can have mass appeal
Those articles are just using the same examples (often verbatim) from the official docs. It's obvious that the authors haven't actually developed anything themselves.
There may be a lot of quality material out there; it's just hidden under the mountain of low-effort scraped, copied & AI-generated content.
Whose voice are you using when adding your hand-crafted prose? Mimicking the style of the 80%, or switching to your own?
Perhaps I'm a Luddite, or just in the dissonance phase toward enlightenment, but at the moment I don't want to invest in AI fiction. A big part of the experience for me is understanding the author's mind, not just the story being told.
Plot twist: people who do first drafts and structural edits with AI can still do line edits and copy edits by hand for personal voice (and you have to anyhow if you want the prose to be exceptional).
I think there’s only a very small subset of fiction that uses the prose to that extent. Much like code really. If you are writing original algorithms you cannot use the LLM. If you are just remixing existing ones, it becomes a lot more useful.
Also, I guess I missed the thrust of your question, though the answer is similar. Most voices work for most characters. There are only so many ways to say something, but occasionally you have to adjust the sentence or re-prompt the whole thing (the LLM has a tendency to see the best in characters).
Perhaps on a relative scale "most" fiction doesn't carry any sort of deeper meaning, but if you look at things like "Hugo or Nebula Award nominees" (to pluck out the SF/F genre as a category), I'd say that almost every single one of them, going back all those decades, has something more to say than just their straightforward text.
And unless reading is your day job or only hobby, that's a massive, massive corpus of interesting text. (In just one genre! There are more genres!) So on an absolute scale, there is so much fiction to read with more-than-surface-level meaning that I personally just don't understand why anyone would have the least interest in reading AI slop.
(I also don't have any real interest in most Kindle Unlimited works, probably for similar reasons. Though I am quite certain there are diamonds there, I've just not had particularly much time for/good luck at finding them.)
Sure, but that more-than-surface-level meaning comes out in the story, not often in the specific way the sentences are written (I acknowledge those exist, I just don't consider them the majority).
Also, you say you don't understand why anyone would be interested in the AI slop. But from the article we learn that one is indistinguishable from the other (apparently even to the one professional author who tried).
I was disappointed that the results shown didn't break things down a bit more between star ratings given and authorship guesses. I didn't think any of these stories were amazing flash fiction, and I think that's relevant here. I'm curious to know what people who liked them all, or at least really liked one of them, had to say on judging AI-vs-human.
> A big part of the experience for me is understanding the author's mind, not just the story being told
AI content is really exposing that people fall into two groups: one that goes beyond the surface text into deeper layers of context/subtext, and one that doesn't.
Agreed. The level of mediocrity and group-think on LI is frightening. It's no wonder so many companies/brands struggle.
Daily I see an OP based on myths/incomplete ideas (read: ultimately the originator is sharing bad advice), and then 95% of the replies mindlessly agree. The flaws are often obvious, and no one notices.
It probably gets that way because nobody wants to be the one to argue back, as it puts them in a bad light. So what's left are cheap platitudes and confirmations.
I understand the unwillingness to argue. But we're talking about foundational flaws in the advice being offered. Even a passive-aggressive "Are you sure about…?" would be 10x better than shameless group-think.
Unless it's a safe enough disagreement: piling on an overconfident but incorrect post, for example. As long as correcting it makes you look smart, hard-working, a leader.