there is a somewhat unfiltered GPT4 at Azure, but they really don't want anybody's money (afaik only "trusted" corporate entities can access it)
at this time, your only option is local models. if you don't have the hardware to run them yourself, there are plenty of hosts - poe/perplexity/together etc.
llama3 is (hopefully) coming soon, and if it has improved as much as llama2 improved over llama1, and provides at least 16k baseline context size, it will be in between gpt3.5 and gpt4 in terms of quality, which is mostly enough.
Yes, there are two issues for me: my hardware is not powerful enough (only 8 GB of VRAM), and the models are still not intelligent enough. At the moment I keep tabs open for different websites, and when I have a question I compare their answers to see where things stand. I would like a model that says "I don't know" more often than it responds with the wrong thing. I would also like it to follow instructions: right now I ask them to "rewrite the previous response but without X" and they reply "sure, here is the response without X" and then don't actually follow the instruction, as if they are "hard coded" to do X. An example of X is "do not add a summary or conclusion".
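One thing that sometimes helps with ignored constraints is putting them in the system prompt up front instead of asking for a rewrite mid-conversation. A minimal sketch, assuming an OpenAI-compatible endpoint (e.g. a local llama.cpp server or one of the hosts mentioned above); the URL, API key, and model name are placeholders, not specific recommendations:

```python
# Minimal sketch: state the constraint in the system prompt so every answer
# is generated under it, rather than asking the model to "rewrite without X"
# afterwards. The base_url, api_key, and model name are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="local-model",  # whatever model the server is configured to run
    messages=[
        {"role": "system", "content": "Answer concisely. Do not add a summary or conclusion."},
        {"role": "user", "content": "Explain how HTTP keep-alive works."},
    ],
    temperature=0.2,
)
print(resp.choices[0].message.content)
```

It is not a guarantee (smaller local models still drift back to their trained habits), but constraints in the system prompt tend to stick better than post-hoc rewrite requests.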