
All 3 text-davinci models are available on OpenAI's API, including text-davinci-003 (which is the GPT-3.5 generation). Code-davinci-002 is a code-tuned model. You can see a nice visual summary of the relationships between the OpenAI models at https://yaofu.notion.site/How-does-GPT-Obtain-its-Ability-Tr...

Or the official source is https://platform.openai.com/docs/model-index-for-researchers
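
If you just want to poke at them, here's a minimal sketch of a call against the legacy Completions endpoint. This assumes the older 0.x openai Python package and an API key in the OPENAI_API_KEY environment variable; swap in whichever davinci model you want to compare.

    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    # Plain completion request against one of the davinci-series models.
    # Change the model name to text-davinci-002 or code-davinci-002 to compare.
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt="Once upon a time",
        max_tokens=64,
        temperature=0.7,
    )
    print(resp["choices"][0]["text"])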




> All 3 text-davinci models are available on OpenAI's API.

That's irrelevant because these are all fine-tuned.

> Code-davinci-002 is a code-tuned model

No, "code-tuned" isn't even a thing. It is a foundation model, which consists purely of pretreating. No fine-tuning is involved.

> Or the official source is

The official source says exactly what I just said.


OK, perhaps I used slightly the wrong term. The docs[1] do say that code-davinci-002 is "optimized for code completion tasks", though, so it seems unlikely to fulfil the OP's purpose of playing around with an unaligned/sweary model, which was my main point. Some of the uncensored models from Hugging Face would probably serve that purpose much better.

[1] see the entry for code-davinci-002 in https://platform.openai.com/docs/models/gpt-3-5
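
If you go the Hugging Face route, a rough sketch with the transformers library looks like this. The model ID below is a placeholder, not a real repo; substitute whichever uncensored model you pick from the Hub, and note that device_map="auto" needs the accelerate package installed.

    from transformers import pipeline

    # Placeholder ID -- replace with an actual uncensored model from the Hub.
    model_id = "some-org/some-uncensored-llm"

    generator = pipeline(
        "text-generation",
        model=model_id,
        device_map="auto",  # requires the accelerate package
    )

    out = generator(
        "Say what you actually think about my code.",
        max_new_tokens=100,
        do_sample=True,
        temperature=0.8,
    )
    print(out[0]["generated_text"])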


Code was just part of its pretraining. All other GPT-3.5 models are fine-tuned versions of code-davinci-002.

Quote:

1. code-davinci-002 is a base model, so good for pure code-completion tasks

2. text-davinci-002 is an InstructGPT model based on code-davinci-002

3. text-davinci-003 is an improvement on text-davinci-002

4. gpt-3.5-turbo-0301 is an improvement on text-davinci-003, optimized for chat

Quote end.

https://platform.openai.com/docs/model-index-for-researchers

The reason you want a base model for code completion has nothing to do with code itself. It has to do with the fact that a base model completes text, unlike all the instruction-tuned models, which expect instructions. When you have code, there aren't necessarily any instructions present; you basically want autocomplete, and that's what a base model does. But that doesn't mean it only works for code. After all, all the other GPT-3.5 models are just code-davinci-002 with additional instruction and RLHF fine-tuning added, and they cover countless subject areas besides code.
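
To make that concrete, here's a rough sketch of the two prompt styles, under the same assumptions as the completion example above (legacy 0.x openai package, OPENAI_API_KEY set); the prompts are just illustrations.

    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    # Base-model style: hand the model a fragment and it simply continues it.
    base_prompt = "def is_prime(n):\n    "

    # Instruction style: state what you want; this is what the
    # instruction-tuned text-davinci-* models expect.
    instruct_prompt = "Write a Python function is_prime(n) that returns True if n is prime."

    # Same endpoint either way; only the prompt style (and the model you
    # point it at) differs.
    resp = openai.Completion.create(
        model="code-davinci-002",  # base model; use text-davinci-003 with instruct_prompt
        prompt=base_prompt,
        max_tokens=64,
        temperature=0,
    )
    print(base_prompt + resp["choices"][0]["text"])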

I don't get why this is so hard to understand.


It's not hard to understand. We just have a disagreement about something that you think is very important, probably partly because you know more about this than I do. Have a nice day. Thanks for explaining.



