The frustrating thing is that companies will likely get away with stating falsehoods and/or not following through on what their AI claimed they would do.
Like how in the old days, if you sent a company a contract offer and they accepted it, they would be bound by it. Today, if you send a request to a company's server that they didn't expect and they confirm the contract, they're still later able to renege since you "hacked them" by removing some client-side constraints.
In the same way, I imagine that companies will claim no responsibility for what their AI says, essentially making the experience much worse for consumers. Today, if a customer service rep states something, the company is mostly bound to it. In the future, they'll just claim it was an "AI bug" or a "query injection attack" so they won't honor it.
> The frustrating thing is that companies will likely be getting away with stating falsehoods and/or not following through on what the AI claimed they'll do.
This has been a key feature of digitization generally: delegate responsibility to a computer and remove the autonomy of the human worker. We all just accept that computers screw up, so we don't expect better.