
Waymo's system generates a map of the environment, with obstacles, other road users, and predictions of what those road users will do. The driving system runs off that map, and the map can be evaluated, both by humans and against later data about what actually happened. Passengers get to watch a simplified version of it; early papers from the Google self-driving project showed more detailed maps.
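
To make that concrete, here is a minimal sketch of what such an evaluable map might look like as a data structure. This is an illustration of the idea only, not Waymo's actual representation; every name and type in it is hypothetical.

    # Minimal sketch of an evaluable world model: tracked objects with
    # predicted trajectories, scored later against what actually happened.
    # All names here are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class TrackedObject:
        object_id: int
        kind: str                      # e.g. "vehicle", "pedestrian", "cyclist"
        position: tuple[float, float]  # current (x, y) in map coordinates
        predicted_path: list[tuple[float, float]] = field(default_factory=list)

    @dataclass
    class WorldModel:
        timestamp: float
        objects: list[TrackedObject] = field(default_factory=list)

        def prediction_error(self, object_id: int,
                             actual_path: list[tuple[float, float]]) -> float:
            # Evaluate a past prediction against ground truth: mean
            # Euclidean distance between predicted and actual points.
            obj = next(o for o in self.objects if o.object_id == object_id)
            pairs = list(zip(obj.predicted_path, actual_path))
            dists = [((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
                     for (px, py), (ax, ay) in pairs]
            return sum(dists) / len(dists) if dists else 0.0

The point is just that the model is explicit data, so a human, a passenger display, and a later scoring pass can all inspect the same thing.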

So there's an internal model. Much criticism of AI has centered on the lack of an internal model. This system has one. It's specialized, but well matched to its task.

We see this in other robotics efforts, too: there's a model and a plan before there's action. Other kinds of AI, especially "agentic" systems, may need that kind of explicit internal model. In a previous posting, about an AI system that was supposed to plan stocking for a vending machine, I suggested the system maintain a spreadsheet, so it didn't make obvious business mistakes.
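
To sketch what that "spreadsheet" might look like (purely hypothetical, not the system from that experiment): the agent keeps an explicit ledger, and every proposed action is checked against it before it executes.

    # Hypothetical ledger for a vending-machine agent: explicit state that
    # every action must pass checks against. A sketch of the suggestion
    # above, not code from the actual experiment.
    from dataclasses import dataclass

    @dataclass
    class LedgerRow:
        item: str
        unit_cost: float   # what the machine pays per unit
        price: float       # what it charges per unit
        stock: int

    class Ledger:
        def __init__(self, cash: float):
            self.cash = cash
            self.rows: dict[str, LedgerRow] = {}

        def restock(self, item: str, unit_cost: float, price: float, qty: int) -> None:
            cost = unit_cost * qty
            # Explicit checks that catch obvious business mistakes:
            if cost > self.cash:
                raise ValueError(f"would spend {cost:.2f} with only {self.cash:.2f} on hand")
            if price <= unit_cost:
                raise ValueError(f"{item}: price {price:.2f} at or below cost {unit_cost:.2f}")
            self.cash -= cost
            row = self.rows.setdefault(item, LedgerRow(item, unit_cost, price, 0))
            row.unit_cost, row.price = unit_cost, price
            row.stock += qty

The language model can still propose the restocking actions, but they go through the ledger, which refuses anything that visibly loses money.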


