IMO effective guard rails seem like the most meaningful competitive advantage an AI company can offer. AI can obviously do some really impressive stuff, but the downside risk is also high and unbounded. If you're thinking of putting it into your pipeline, your main concern is going to be that it goes rogue and abandons its purpose without warning.
Now that's not to say that the particular guard rails OpenAI puts in their general access models are the "correct" ones - but being able to reliably set them up seems essential for commercialization.
> IMO effective guard rails seem like the most meaningful competitive advantage an AI company can offer.
Configurable guard rails are; the right guard rails are very use-specific, and generic guard rails will, for many real uses, be simultaneously too aggressive and too lenient.
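To make the "simultaneously too aggressive and too lenient" point concrete, here is a minimal, purely hypothetical sketch: all the names (`GuardRail`, the topic strings, the policies) are invented for illustration and don't correspond to any real product's API.

```python
# Hypothetical sketch: a single generic guard-rail policy vs. use-specific
# policies. All class names, topics, and policies here are invented.

from dataclasses import dataclass, field

@dataclass
class GuardRail:
    # Topics this deployment refuses to discuss.
    blocked_topics: set = field(default_factory=set)

    def allows(self, topic: str) -> bool:
        return topic not in self.blocked_topics

# One generic policy tries to serve every customer at once.
generic = GuardRail(blocked_topics={"medical_dosage", "exploit_code"})

# A medical-triage product must discuss dosages, but also wants
# legal advice off the table.
medical = GuardRail(blocked_topics={"exploit_code", "legal_advice"})

# For the medical product, the generic policy is too aggressive...
print(generic.allows("medical_dosage"))  # False: blocks a topic it needs
print(medical.allows("medical_dosage"))  # True

# ...and too lenient at the same time: legal advice slips through.
print(generic.allows("legal_advice"))    # True: allows an unwanted topic
print(medical.allows("legal_advice"))    # False
```

The point of the sketch is just that "correct" is defined per deployment: the same fixed blocklist over-blocks one customer's core use case while under-blocking a topic that customer considers out of bounds.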
I totally agree that generic guard rails are harder to get right - but it feels like a "turtles all the way down" kind of situation. To configure use-specific behavior, you first need a model whose general behavior can be reliably shaped.
OpenAI can prove to customers they can keep the model in line for their specific use case if no horror stories emerge for the generic one. It's always possible that partners could come up with effective specific guidelines for their use case - but that's probably in the domain of trade secrets so OpenAI can't really rely on that for marketing / proof.