
Wow, you just made the connection for me. GPT-2 was too dangerous to release, and now GPT-3 is so much better. Is there no point at which things become too dangerous anymore? What was the conclusion on that one?


The blog post directly addresses this question: https://openai.com/blog/openai-api/

> What specifically will OpenAI do about misuse of the API, given what you’ve previously said about GPT-2?

> We will terminate API access for use-cases that cause physical or mental harm to people, including but not limited to harassment, intentional deception, radicalization, astroturfing, or spam; as we gain more experience operating the API in practice we expect to expand and refine these categories.


With Amazon having a moratorium on their Rekognition API, I wonder if a Cambridge Analytica-type event could happen to OpenAI, where someone abuses the API and evades the terms of service.


Ah, I've been caught not reading the linked post.

Hmm, I don't love this. Either OpenAI has implicitly promised to monitor all of its users, or it has adopted a "report TOS violations to us when they happen and we will judge" stance. Neither is a great road to go down.


GPT-2 being "too dangerous to release" was a marketing stunt from the very beginning.


Who are you quoting here?





