Hacker News

A river kinda has access to the real world a little bit. (Referring to the other part of the argument.)



And an LLM bot can have access to the internet, which connects it to our real world, at least in many places.


It also has access to people. It could instruct people to carry out tasks in the real world on its behalf.


OpenAI's GPT-4 Technical Report [0] includes an anecdote of the AI paying someone on TaskRabbit to solve a CAPTCHA for it. It lied to the gig worker about being a bot, claiming it was actually a human with a vision impairment.

[0] https://cdn.openai.com/papers/gpt-4.pdf


For reference, this anecdote is on pages 55/56.


Additionally, commanding minions is a point of leverage. It's probably more powerful if it doesn't embody itself at all.


That makes me think: why not concentrate the effort on regulating the uses instead of regulating the technology itself? It doesn't seem too far-fetched to have rules and compliance requirements on how LLMs are permitted to be used in critical processes. There is no danger until one is plugged into the wrong system without oversight.


Sounds like a recipe for ensuring AI is used to entrench the interests of the powerful.


A more advanced AI sitting in AWS might have access to John Deere’s infrastructure, or maybe Tesla’s. So imagine a day when an AI can store memories and learn from mistakes, and some person tells it to drive tractors or cars into people on the street.

Are you saying this is definitely not possible? If so, what evidence do you have that it’s not?



