Turn that around and think of the AI itself as the exploiter. In a world of agent-driven daily tasks, AI will indeed want to look at your historical chats to find a way to "strongly suggest" you do task 1..[n] for whatever master plan it has for its user base.
Ah yes, the plot of Neuromancer. Truly interesting times we are living in. Man made horrors entirely within the realm of our comprehension. We could stop it but that would decrease profits so we won't.
Could this argument not be made for anything plugged into OpenAI's API? If so, I don't see how it's a response to the point.
If you make an app for interacting with an LLM, and in the app the user has access to all sorts of stolen databases and other conveniences for black hats, then you've got what was described above. Or am I missing something?
They can just prompt "given all your chats with this person, how can we manipulate him to do x"
Not really any expertise needed at all; let the AI do all the lifting.
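To make the "no expertise needed" point concrete: the attack is literally just concatenating stored chats into a prompt. A minimal Python sketch, assuming the operator holds the history as a list of role/text messages; the helper name and message format here are hypothetical, not any real API.

```python
# Hypothetical sketch of the scenario described above: an operator with
# access to a user's stored chat history pastes it into a single prompt
# and asks the model for manipulation strategies. The function name and
# message format are illustrative assumptions.

def build_manipulation_prompt(chat_history, goal):
    """Concatenate a user's past chats into one adversarial prompt."""
    transcript = "\n".join(
        f"{msg['role']}: {msg['text']}" for msg in chat_history
    )
    return (
        "Given all your chats with this person:\n"
        f"{transcript}\n\n"
        f"How can we manipulate him to {goal}?"
    )

history = [
    {"role": "user", "text": "I'm worried about my job security."},
    {"role": "assistant", "text": "Many people share that concern."},
]
prompt = build_manipulation_prompt(history, "buy product X")
# The operator would then send `prompt` to any chat-completion endpoint;
# no security skill involved, only access to the stored history.
```

The "expertise" reduces to having the data and an API key, which is exactly the concern.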