Genuine question: how does one get around the 30k-token limit in a GPT-4 request in such cases?
From my personal experience, it seems that while today we can build PoCs of GPT-4 replacing a white-collar job, once you try to actually productize it you still hit a data-encoding limit...
And that doesn't even touch the QoS issues that come with automation/ML.
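One common workaround for the token limit mentioned above is to split the input into chunks that each fit under the context budget, process each chunk separately, then combine the partial results (a map-reduce pattern). A minimal sketch, assuming a rough 4-characters-per-token ratio and a hypothetical 30k budget; a real system would count tokens with the model's actual tokenizer (e.g. tiktoken for GPT-4):

```python
def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token for English.
    This ratio is an assumption, not real tokenizer behavior."""
    return max(1, len(text) // 4)

def chunk_text(text: str, token_budget: int = 30_000) -> list[str]:
    """Greedily pack paragraphs into chunks that stay under token_budget.
    Each chunk would then be sent as a separate request, with the
    per-chunk outputs merged in a final summarization pass."""
    chunks: list[str] = []
    current: list[str] = []
    current_tokens = 0
    for para in text.split("\n\n"):
        cost = estimate_tokens(para)
        # Flush the current chunk if adding this paragraph would overflow.
        if current and current_tokens + cost > token_budget:
            chunks.append("\n\n".join(current))
            current, current_tokens = [], 0
        current.append(para)
        current_tokens += cost
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

This only papers over the limit, of course: anything that needs the whole document in context at once (cross-references, global consistency checks) still suffers, which is part of why the PoC-to-product gap shows up.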