> The key is that AI needs to have a fiduciary duty to its users: it should always suggest the most economically advantageous deal for the user, not what is best for the AI if those are not aligned.
In practice this means you get whatever the provider can prove to a regulator was the most sensible choice for you, and that in turn means the recommendation can't come from an opaque AI, because you can't show how that algorithm works.