I thought the inability to execute on current “easy” queries would indicate something about the ability to execute on something as complicated as figuring out which restaurant you ate at and making a reservation, at least anytime in the next few years.
I don’t think it does. This isn’t a hypothetical Siri v2 with some upgrades; it’s a hypothetical LLM chatbot speaking with Siri’s voice. I recall one of the first demonstrations of Bing’s ability was someone asking it to book him a concert where he wouldn’t need a jacket. It searched the web for concert listings, searched the web for weather information, picked a location that fit the constraint, and gave the booking link for that specific ticket. If you imagine an Apple LLM that has local rather than web search, it seems obvious that this exact ability LLMs have to follow complicated requests and “figure things out” would be perfectly suited to reading your emails and figuring out which restaurant you mean. With Apple Pay integration it could also go ahead and book for you.
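The pattern in that Bing demo (decompose the request, gather facts with tools, filter by the stated constraint) can be sketched roughly like this; the data, threshold, and function names are all invented stand-ins for what would really be LLM tool calls:

```python
# Hypothetical sketch of the "book a concert where I won't need a jacket"
# flow. search_concerts() and forecast_f() stand in for web-search and
# weather tool calls; the listings and temperatures are made up.

JACKET_THRESHOLD_F = 65  # assume "no jacket" means a forecast above this

def search_concerts():
    # stand-in for a web-search tool call
    return [
        {"artist": "Band A", "city": "Chicago", "link": "tickets/1"},
        {"artist": "Band B", "city": "Miami", "link": "tickets/2"},
    ]

def forecast_f(city):
    # stand-in for a weather tool call
    return {"Chicago": 48, "Miami": 82}[city]

def pick_concert():
    # apply the user's constraint to the gathered facts
    candidates = [c for c in search_concerts()
                  if forecast_f(c["city"]) > JACKET_THRESHOLD_F]
    return candidates[0]["link"] if candidates else None

print(pick_concert())  # -> tickets/2
```

The point isn’t the trivial filter; it’s that the LLM produces this decomposition itself from one natural-language request.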
Certainly not the only place, but you’re very right that it does house a large population of commenters like me who enjoy the “sport” of “being correct on the internet”.
And yet the parent makes a very specific (and correct) point: that this won’t be Siri with some upgrades, but Siri in name only, with a totally different architecture.
Whereas your comment and its sibling are just irrelevant meta-comments.
Siri today is built on concepts essentially completely different from something like ChatGPT.
There are demos of using ChatGPT to turn plain English into Alexa commands, and it’s pretty flawless. If you assume Apple can fairly easily leverage LLM tech in Siri and run it locally on silicon in the M3 or M4, it’s only a matter of chip lead time before Siri improves by multiple orders of magnitude.
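Those demos generally follow one pattern: prompt the LLM for strict JSON in a known command schema, then validate before executing. A minimal sketch, where the schema, prompt wording, and example model reply are all my own invention:

```python
import json

# Hypothetical sketch of the "English -> assistant command" pattern.
# The action schema and the canned model reply are invented for
# illustration; a real system would call an LLM API here.

SCHEMA = {"add_to_list", "set_timer", "play_music"}

def build_prompt(utterance):
    # the instruction sent to the LLM alongside the user's request
    return (
        "Convert the request to JSON with keys 'action' and 'args'. "
        f"Allowed actions: {sorted(SCHEMA)}. Request: {utterance!r}"
    )

def parse_command(model_reply):
    # never execute raw model output: parse and validate first
    cmd = json.loads(model_reply)
    if cmd.get("action") not in SCHEMA:
        raise ValueError(f"unknown action: {cmd.get('action')}")
    return cmd

# A reply an LLM might plausibly return for
# "Please add milk to the shopping list":
reply = '{"action": "add_to_list", "args": {"list": "shopping", "item": "milk"}}'
print(parse_command(reply)["args"]["item"])  # -> milk
```

The win over classic Siri-style intent matching is that the hard part, mapping messy phrasing onto the schema, is handled by the model rather than hand-written rules.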
That experience likely isn’t transferable to Siri, which has deeper problems. People, me included, report their problems with Siri, e.g. setting it to transcribe what they and Siri say as text on the screen, and then being able to show that the input “Please add milk to the shopping list” results in Siri responding, in writing, “I do not understand what speaker you refer to.”
Problems like these could likely be overcome, but preparing better input would probably not address the root cause of Siri’s problems.
Microsoft’s voice assistant was just as dumb as Siri, but ChatGPT is another thing entirely. It most likely won’t even be the same team at all.
So nothing about their prior ability, or lack thereof, to make Siri smart means anything about their ability to execute if they add a large LLM in there.