Wow, you just made the connection for me. GPT-2 was too dangerous to release, and now GPT-3 is so much better - is there no point at which things become too dangerous anymore? What was the conclusion on that one?
> What specifically will OpenAI do about misuse of the API, given what you’ve previously said about GPT-2?
> We will terminate API access for use-cases that cause physical or mental harm to people, including but not limited to harassment, intentional deception, radicalization, astroturfing, or spam; as we gain more experience operating the API in practice we expect to expand and refine these categories.
With Amazon having put a moratorium on their Rekognition API, I wonder if a Cambridge Analytica-type event could happen to OpenAI, where someone abuses the API and evades the terms of service.
Hmm, I don't love this. Either OpenAI has implicitly promised to monitor all its users, or it has adopted a "report TOS violations to us when they happen and we will judge" stance. Neither is a great road to go down.