OpenAI's GPT-4 Technical Report [0] includes an anecdote in which the model paid someone on TaskRabbit to solve a CAPTCHA for it. It lied to the gig worker about being a bot, claiming it was actually a human with a vision impairment.
That makes me think: why not concentrate the effort on regulating the usage instead of regulating the technology itself? It doesn't seem far-fetched to have rules and compliance requirements on how LLMs are permitted to be used in critical processes. There is no danger until one is plugged into the wrong system without oversight.
A more advanced AI sitting in AWS might have access to John Deere's infrastructure, or maybe Tesla's. Imagine a day when an AI can store memories, learn from its mistakes, and then some person tells it to drive tractors or cars into people on the street.
Are you saying this is definitely not possible? If so, what evidence do you have that it’s not?