I enjoy Simon's writing, but respectfully I think he missed the mark on this. I do have some biases I bring to the argument: I have been working in deep learning for a number of years, mostly in NLP. I gave OpenAI my credit card for API access a while ago for GPT-3, and I often find it valuable in my work.
First, and most importantly: Microsoft is a business. They own just a small part of the search business that Google dominates. With ChatGPT+Bing they accomplish quite a lot: a good chance of getting a bit more share of the search market; they will cost a competitor (Google) a lot of money and maybe force Google into an Innovator's Dilemma situation; they are getting fantastic publicity; and they showed engineering cleverness in working around some of ChatGPT's shortcomings.
I have been using ChatGPT+Bing exclusively for the last day as my search engine and I like it for a few reasons:
1. ChatGPT is best when you give it context text and a question. ChatGPT+Bing shows you some of the realtime web searches it makes to get this context text and then uses ChatGPT in a practical way, not just trying to trip it up into writing an article :-) (There's a quick sketch of this prompting pattern after this list.)
2. I feel like it saves me time even when I follow the reference links it provides.
3. It is fun and I find myself asking it questions on a startup idea I have, and other things I would not have thought to ask a search engine.
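For what it's worth, here is roughly the "context text plus a question" pattern as I use it against the OpenAI API. This is a minimal sketch assuming the GPT-3-era completions endpoint; the retrieved passage is a made-up stand-in for whatever Bing actually fetches, and this is not how Bing is wired up internally:

```python
import openai  # pip install openai (the 0.x GPT-3-era SDK)

openai.api_key = "sk-..."  # your API key

# Hypothetical retrieved passage; Bing supplies something like this
# from its live web searches.
context = (
    "Bing's chat mode runs web searches, then feeds the retrieved text "
    "to the language model along with the user's question."
)
question = "How does Bing ground its chat answers in current web content?"

# The "context + question" prompt: the model answers from the supplied
# text instead of relying only on what it memorized during training.
prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{context}\n\n"
    f"Question: {question}\nAnswer:"
)

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3-era completion model
    prompt=prompt,
    max_tokens=150,
    temperature=0.2,  # keep it factual rather than creative
)
print(response.choices[0].text.strip())
```

The point is that the model is far more reliable answering over text you hand it than answering from its weights alone, which is exactly what the Bing integration exploits.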
I think that ChatGPT+Bing is just the first baby step in the direction that most human/computer interaction will probably evolve toward.
There is an old AI joke about a robot that, after being told to go to the Moon, climbs a tree, notes that it has taken the first baby steps toward the goal, and then gets stuck.
The way people are trying to use ChatGPT is certainly an example of what humans _hope_ the future of human/computer interaction will be. Whether or not Large Language Models such as ChatGPT are the path forward is yet to be seen. Personally, I think the model of "ever-increasing neural network sizes" is a dead end. What is needed is better semantic understanding --- that is, mapping words to abstract concepts, operating on those concepts, and then translating concepts back into words. We don't know how to do this today; all we know how to do is to make the neural networks larger and larger.
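The closest thing we can do today is arithmetic on learned word vectors, which is suggestive but falls far short of real concept manipulation. For example, with pretrained GloVe vectors via gensim (one common off-the-shelf choice):

```python
# pip install gensim
import gensim.downloader as api

# Pretrained GloVe word vectors trained on Wikipedia + Gigaword.
vectors = api.load("glove-wiki-gigaword-100")

# "Operating on concepts" today amounts to vector arithmetic:
# king - man + woman lands near queen in the embedding space.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```

That analogy trick works for a handful of relations, but it is a geometric accident of training, not a system that explicitly represents and reasons over concepts.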
What we need is a way to build networks of networks: sub-networks that handle memory, a sense of time, and reasoning, such that the overall network has pre-defined structures for these various skills and ways of training each sub-network. Organic brains have all of this; today's neural networks do not.
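Purely as a toy sketch of what I mean by a "network of networks" (the module names and wiring here are mine and entirely hypothetical, not an existing architecture), something like this in PyTorch:

```python
import torch
import torch.nn as nn

class MemoryModule(nn.Module):
    """Hypothetical sub-network: carries a recurrent state across steps."""
    def __init__(self, dim):
        super().__init__()
        self.rnn = nn.GRUCell(dim, dim)
        self.state = None

    def forward(self, x):
        if self.state is None or self.state.shape[0] != x.shape[0]:
            self.state = torch.zeros(x.shape[0], x.shape[1])
        self.state = self.rnn(x, self.state)
        return self.state

class ReasoningModule(nn.Module):
    """Hypothetical sub-network: transforms the combined percept + memory."""
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return self.mlp(x)

class NetworkOfNetworks(nn.Module):
    """Pre-defined structure wiring the sub-networks together."""
    def __init__(self, dim=64):
        super().__init__()
        self.perceive = nn.Linear(dim, dim)  # stand-in for a perception network
        self.memory = MemoryModule(dim)
        self.reason = ReasoningModule(dim)
        self.decode = nn.Linear(dim, dim)    # map internal state back out

    def forward(self, x):
        h = torch.relu(self.perceive(x))
        m = self.memory(h)        # remembered context
        r = self.reason(h + m)    # operate on percept plus memory
        return self.decode(r)

net = NetworkOfNetworks()
out = net(torch.randn(8, 64))  # batch of 8 toy inputs
print(out.shape)               # torch.Size([8, 64])
```

The hard part, of course, is not wiring modules together like this; it is figuring out how to train the sub-networks so that they actually acquire those distinct skills.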
> What is needed is better semantic understanding --- that is, mapping words to abstract concepts, operating on those concepts, and then translating concepts back into words. We don't know how to do this today; all we know how to do is to make the neural networks larger and larger.
It's pretty clear that these LLMs can basically already do this: they can solve the exact same tasks in a different language, mapping from the concept space they were trained on in English into other languages. It seems like you are waiting for a time when we explicitly create a concept space with operations performed on it; that will never happen.
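You can see a shadow of this shared concept space with off-the-shelf multilingual sentence embeddings. A rough illustration (the model name is just one common example, and embedding similarity is only a proxy for "same concept"):

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

# A multilingual model trained to map sentences from many languages
# into one shared vector space.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

sentences = [
    "The cat is sleeping on the sofa.",  # English
    "Le chat dort sur le canapé.",       # French, same concept
    "The stock market fell sharply.",    # English, unrelated concept
]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Same concept across languages should score higher than
# different concepts in the same language.
print(util.cos_sim(embeddings[0], embeddings[1]).item())  # high similarity
print(util.cos_sim(embeddings[0], embeddings[2]).item())  # low similarity
```

Nobody hand-designed that shared space; it fell out of training, which is my point.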
> that is, mapping words to abstract concepts, operating on those concepts, and then translating concepts back into words
I feel like DNNs do this today. At higher levels of the network they create abstractions, and the eventual output maps those abstractions to something. What you describe seems evolutionary rather than revolutionary to me. It feels more like we finally discovered booster rockets but still can't fully get out of the atmosphere.
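One way to peek at those intermediate abstractions is to pull the hidden states out of a pretrained transformer. A quick sketch with Hugging Face's transformers library (BERT here is just a convenient example model):

```python
# pip install transformers torch
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

inputs = tokenizer("The robot climbed the tree.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One tensor per layer: the embedding layer plus 12 transformer layers
# for BERT-base. Probing studies suggest lower layers track surface
# features while higher layers capture more abstract ones.
for i, layer in enumerate(outputs.hidden_states):
    print(f"layer {i}: shape {tuple(layer.shape)}")
```

Whether those layer-wise representations count as the kind of "concepts" the parent comment wants is exactly the open question.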
They might have their own semantics, but it's not our semantics! The written word can only approximate our human experience to begin with, and now this is an approximation of an approximation. Perhaps if we were born as writing animals instead of talking ones...
This is true, but I think it still feels evolutionary. We need to train models using all of the inputs that we have: touch, sight, sound, smell. But I think if we did that, they'd be eerily close to us.