What LLM are you currently using? And how do you guard against hallucinations/drift when generating content of this length? Do you start by asking the model to generate an outline of the book, and then have it expand each chapter in more detail? Awesome project
Currently using GPT-4o combined with the Perplexity API for real-time context infusion (which also reduces hallucinations for the most part). And yes: starting with an outline, then writing chapter by chapter while doing real-time research. After the initial draft there's another editing round to make everything more coherent.
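Roughly, the loop looks like this (a simplified sketch; the function names are illustrative, not the actual project code). The key idea is that retrieved facts get injected into each chapter prompt before drafting:

```python
# Hypothetical sketch of the outline-first pipeline described above.
# `research` and `write` are injected callables: in the real setup,
# `write` would wrap a GPT-4o chat-completions call and `research`
# would query the Perplexity API (which is OpenAI-compatible).
from typing import Callable, List


def draft_book(topic: str,
               research: Callable[[str], str],
               write: Callable[[str], str]) -> List[str]:
    """Outline first, then expand each chapter with fresh research context."""
    outline = write(f"Write a numbered chapter outline for a book about {topic}.")
    chapters = []
    for line in outline.splitlines():
        line = line.strip()
        if not line:
            continue
        # Context infusion: retrieved facts go into the prompt, which is
        # the main hedge against hallucination during chapter drafting.
        facts = research(f"Key up-to-date facts for: {line}")
        chapters.append(write(
            f"Chapter brief: {line}\n"
            f"Verified context (stay within these facts):\n{facts}\n"
            "Write the full chapter."
        ))
    # Second pass: edit the whole draft at once for coherence.
    full = "\n\n".join(chapters)
    return [write(f"Edit for consistency and flow:\n{full}")]
```

Passing the model calls in as callables also makes the pipeline easy to test with stubs before spending API credits.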