> The business problem OpenAI faces is that their systems aren't good enough that users can trust the results.
It depends what you're doing with it. Answering arbitrary questions is very different from document summarization or from mapping arbitrary questions onto a list of canned questions to then answer.
It also doesn't have to be perfect. Instead, the overall system (AI + error reporting & handling) just has to be cheaper than the same overall system built on whatever the AI is replacing.
If it makes errors 3% of the time, and people from Mechanical Turk make errors 2% of the time, its usefulness depends on whether the cost of those 50% more errors exceeds the savings from paying a model provider rather than paying humans.
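To make the comparison concrete, here's a minimal sketch of that break-even arithmetic. All the numbers (per-task prices, cost per error) are hypothetical placeholders, not real Mechanical Turk or API pricing:

```python
# Break-even sketch: AI vs. human workers, with hypothetical numbers.
def total_cost(tasks, price_per_task, error_rate, cost_per_error):
    """Total cost = labor cost + downstream cost of handling errors."""
    return tasks * (price_per_task + error_rate * cost_per_error)

TASKS = 100_000

# Hypothetical: AI at $0.01/task with a 3% error rate.
ai_cost = total_cost(TASKS, price_per_task=0.01, error_rate=0.03, cost_per_error=1.00)

# Hypothetical: humans at $0.05/task with a 2% error rate.
human_cost = total_cost(TASKS, price_per_task=0.05, error_rate=0.02, cost_per_error=1.00)

print(ai_cost, human_cost)
```

With these made-up numbers the AI comes out cheaper overall despite making 50% more errors, because the per-task savings outweigh the extra error-handling cost; raise `cost_per_error` enough and the conclusion flips.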