Not virtue signaling, but I seem to be asking a different question than people are answering. I’m asking what the product opportunity is and people keep telling me examples of tasks that they use it for.
In many cases the examples are one-off and the only product opportunity is the generative model interface itself. Looking broadly over the replies what I’m seeing is “there’s a thing that used to fail the cost/benefit test, but now the cost is so low that I can automate these things”. So part of my problem is (1) the small benefits of these tasks mean the value proposition comes from volume—that probably comes from the generality of the task engine, and (2) there may be some niche product opportunities on top of the model platform, but the primary big winner here is the platform itself. (That’s not necessarily a new insight, but it seems especially true here.)
The terrifying part is how often I hear people in this thread and elsewhere mentioning tasks that are not tolerant of the failure modes of these models. (For example, a coworker told me their relative is a doctor who uses ChatGPT to diagnose patients.) People keep focusing on the risks of AGI killing us all with paperclips, but I’m much more worried about getting run over by some idiot asking ChatGPT to drive their car.
> Not virtue signaling, but I seem to be asking a different question than people are answering. I’m asking what the product opportunity is and people keep telling me examples of tasks that they use it for.
Sorry, I misunderstood. "I am not using AI" has become a sort of badge of honor in certain communities so I was wondering if that's what it was.
> (2) there may be some niche product opportunities on top of the model platform, but the primary big winner here is the platform itself. (That’s not necessarily a new insight, but it seems especially true here.)
I agree with that conclusion. I think the chat interface is the killer product. I treat ChatGPT as an assistant/intern that is really good at some tasks but that can also sometimes make dumb mistakes. It has also replaced a lot of queries I would have previously done on Google or questions I might have asked somewhere (e.g. in a forum, Reddit, Discord, etc.).
Many startups build domain specific UIs on top of it using the API, but whether that will become a sustainable business model remains to be seen[0]. I am reminded of the many "vertical" search engines that were once trying to compete with Google.
[0] Saying this as someone who did something like that: https://eli5.gg