Thanks for sharing; this is interesting. I'm curious how scraping a website for a GPT knowledge file makes the GPT "better" than the standard model. Couldn't it answer the same questions, assuming these pages were already part of the standard model's training, or by leveraging the browsing feature to extract the same knowledge on the fly?
In our case, we wanted a chat interface into our docs. We provide our docs, forum, and open source examples as the knowledge for the GPT, via this crawler. Base GPT isn't that knowledgeable about our product, but with this approach it's significantly more capable of answering detailed questions about how to integrate Builder.io or deal with errors.
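For anyone curious about the crawling step, here's a minimal sketch of the idea: fetch a list of doc pages, strip them to plain text, and bundle the result into one JSON file you can upload as GPT knowledge. This assumes Node 18+ for the global `fetch`; the URLs and output filename are placeholders, and the real crawler also handles link discovery, selectors, and chunking.

```typescript
// Minimal sketch: collect doc pages into a single GPT knowledge file.
import { writeFile } from "node:fs/promises";

// Hypothetical pages to include as knowledge.
const urls = [
  "https://www.builder.io/c/docs/intro",
  "https://forum.builder.io/t/example-thread",
];

// Very rough HTML-to-text: drop scripts/styles/tags, collapse whitespace.
function htmlToText(html: string): string {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, " ")
    .replace(/<style[\s\S]*?<\/style>/gi, " ")
    .replace(/<[^>]+>/g, " ")
    .replace(/\s+/g, " ")
    .trim();
}

async function crawl(): Promise<void> {
  const pages: { url: string; text: string }[] = [];
  for (const url of urls) {
    const res = await fetch(url); // Node 18+ global fetch
    pages.push({ url, text: htmlToText(await res.text()) });
  }
  // Bundle everything into one file: GPTs cap how many
  // knowledge files you can upload, so fewer, larger files help.
  await writeFile("knowledge.json", JSON.stringify(pages, null, 2));
}

crawl().catch(console.error);
```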