RAG is a tool for the deep research agent to use in finding all of the context it needs. Deep research can call the search tool many times, reflect on the results of the previous searches, and then search for other things as needed. Deep research flows can also generate chain-of-thought-style outputs that are neither search queries nor direct answers to the user.
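A minimal sketch of what that loop can look like, assuming generic `llm` and `search` callables; these are illustrative stand-ins, not Onyx's actual API.

```python
# Sketch of a search-and-reflect research loop.
# llm(prompt) -> str; search(query) -> list of document chunk strings.
# Both are hypothetical stand-ins supplied by the caller.

def research(question: str, llm, search, max_rounds: int = 3) -> str:
    notes: list[str] = []
    query = question
    for _ in range(max_rounds):
        notes.extend(search(query))
        # Reflection step: chain-of-thought output that is neither a search
        # query nor the final answer; it decides what is still missing.
        reflection = llm(
            "Question: " + question + "\n"
            "Evidence so far:\n" + "\n".join(notes) + "\n"
            "If the evidence is sufficient, reply DONE. "
            "Otherwise reply with the next search query only."
        ).strip()
        if reflection == "DONE":
            break
        query = reflection
    return llm(
        "Answer the question using only the evidence.\n"
        "Question: " + question + "\nEvidence:\n" + "\n".join(notes)
    )
```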
Yup, hopefully with Onyx, the folks who have these questions can just fire off a query with agent mode turned on and the LLM will research the relevant tree of knowledge and come back with an answer in a fraction of the time it would take people to do it with all of the handoffs in between.
Yes. I suspect the response won't, right now, be quite as good, but where time or cost matters, an 80%-effective response in 10% of the time it would have taken, or say 5% of the cost (if you were to 'dollarize' worker effort), would present useful options to those asking.
We have a dataset that we use internally to evaluate our search quality. It's more representative of our use case since it contains Slack messages, call transcripts, very technical design docs, and company policies, which is pretty different from what embedding models are typically trained on.
We checked recall at 4K tokens (a pretty typical context limit for the previous generation of LLMs) and were at over 94% recall on our 10K-document set. We also added a lot of noise to it (Slack messages from public Slack workspaces) to get to hundreds of thousands of documents, but recall remained over 90%.
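For context on how a number like that can be measured, here is a minimal sketch of recall under a fixed context-token budget; the 4-characters-per-token heuristic and the data layout are assumptions, not the actual eval harness.

```python
# Sketch: recall under a fixed context-token budget. A relevant chunk only
# counts if it fits before the budget runs out.

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # rough heuristic; swap in a real tokenizer

def recall_at_budget(ranked_chunks, relevant_ids, budget_tokens=4096):
    """ranked_chunks: list of (chunk_id, text) in retrieval order.
    relevant_ids: set of chunk ids labeled relevant for the query."""
    used = 0
    hits = set()
    for chunk_id, text in ranked_chunks:
        used += estimate_tokens(text)
        if used > budget_tokens:
            break
        if chunk_id in relevant_ids:
            hits.add(chunk_id)
    return len(hits) / len(relevant_ids) if relevant_ids else 1.0
```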
Assuming self-hosting, data is processed within the deployment with local deep learning models for embedding, identifying low information documents, etc. A hybrid keyword/vector index is built locally within the deployment as well.
At rest, the data is stored in Postgres and Vespa (the hybrid index), both of which are part of the deployment so it's all local.
The part that typically goes external is the LLM, but many teams also host local LLMs to use with Onyx. In either case, the LLM is not being fine-tuned; the knowledge relevant to the question is passed in as part of the user message.
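As a concrete illustration of that last point, here is a minimal sketch of how retrieved knowledge can be inlined into the user message of a standard chat-completion call; the message layout is illustrative, not Onyx's actual prompt.

```python
# Sketch: retrieved knowledge is inlined into the user message at query time;
# the LLM's weights are never touched. The message layout is illustrative.

def build_messages(question: str, retrieved_chunks: list[str]) -> list[dict]:
    context = "\n\n".join(
        f"[Document {i + 1}]\n{chunk}" for i, chunk in enumerate(retrieved_chunks)
    )
    return [
        {"role": "system", "content": "Answer using only the provided documents."},
        {"role": "user", "content": f"Documents:\n{context}\n\nQuestion: {question}"},
    ]
```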
We built Onyx with data security in mind so we're very proud of the way the data flows within the system. We made the system work well with models that can run without GPUs as well so our users can get good quality results even if deploying on a laptop.
This is a large challenge in itself actually. Every external tool has its own framework for permissions (necessarily so).
For example, Google Drive docs have permissions like "global public", "domain public", and "private", where "private" docs are shared with specific users and groups, and there's also the document owner.
Slack has public channels, private channels, DMs, group DMs.
So we need to map these external objects and their external users/groups into a unified representation within Onyx.
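A rough sketch of what such a unified representation can look like, assuming hypothetical connector payloads for Google Drive and Slack; the class and field names are illustrative, not Onyx's internal schema.

```python
# Sketch of a unified access-control representation. The field names and the
# two mapping helpers are hypothetical, but they show the shape of the problem:
# very different native permission models collapse into one internal ACL.

from dataclasses import dataclass, field

@dataclass
class DocumentAccess:
    # "public" here means visible to everyone in the organization.
    is_public: bool = False
    external_user_emails: set[str] = field(default_factory=set)
    external_group_ids: set[str] = field(default_factory=set)

def from_google_drive(doc: dict) -> DocumentAccess:
    return DocumentAccess(
        is_public=doc["visibility"] in ("global_public", "domain_public"),
        external_user_emails={doc["owner"], *doc.get("shared_users", [])},
        external_group_ids=set(doc.get("shared_groups", [])),
    )

def from_slack(message: dict) -> DocumentAccess:
    # Public channels are visible to the whole workspace; private channels,
    # DMs, and group DMs are visible only to their members.
    return DocumentAccess(
        is_public=message["channel_type"] == "public_channel",
        external_user_emails=set(message.get("member_emails", [])),
    )
```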
Then there are additional challenges like rate limiting, so we cannot poll at sub-second intervals.
The way that we do it is we have async jobs that check for object permission updates and group/user updates against the external sources at a configurable frequency (with defaults that depend on the external source type).
Of course, we always fail closed instead of failing open, and we default to least permissive.
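A minimal sketch of such a sync loop, assuming a hypothetical connector interface; the default intervals and method names are illustrative, and the key behaviors are the per-source polling frequency and failing closed on errors.

```python
# Sketch of a recurring permission-sync job. The connector interface and the
# default frequencies are illustrative assumptions.

import time

DEFAULT_SYNC_INTERVAL_SECONDS = {"google_drive": 600, "slack": 300}

def sync_permissions_forever(connector, source_type: str, interval_overrides=None):
    interval = (interval_overrides or {}).get(
        source_type, DEFAULT_SYNC_INTERVAL_SECONDS.get(source_type, 900)
    )
    while True:
        try:
            for doc_id, access in connector.fetch_permission_updates():
                connector.store_access(doc_id, access)
            for group_id, members in connector.fetch_group_updates():
                connector.store_group(group_id, members)
        except Exception:
            # Fail closed: if we cannot confirm access, treat the affected
            # source as restricted rather than serving stale permissions.
            connector.mark_source_restricted(source_type)
        time.sleep(interval)
```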
Quite a lot to cover here! So in addition to the typical RAG pipeline, we have many other signals, like learning from user feedback, time-based weighting, metadata handling, weighting between title and content, and different custom deep learning models that run at inference and indexing time, all to help retrieval. But this is all part of the RAG component.
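A rough sketch of how a few of those signals could be folded into one ranking score; every weight and the half-life below are made-up illustrations, not Onyx's tuned values.

```python
# Sketch of combining retrieval signals into a single score. All constants
# here are illustrative assumptions.

import time

def rank_score(
    title_sim: float,        # semantic/keyword match against the title
    content_sim: float,      # match against the body
    feedback_boost: float,   # learned from user feedback, roughly in [-1, 1]
    last_updated_ts: float,  # unix timestamp of the last document update
    title_weight: float = 0.35,
    half_life_days: float = 180.0,
) -> float:
    base = title_weight * title_sim + (1 - title_weight) * content_sim
    age_days = max(0.0, (time.time() - last_updated_ts) / 86400)
    recency = 0.5 ** (age_days / half_life_days)   # exponential time decay
    return base * (0.7 + 0.3 * recency) * (1.0 + 0.2 * feedback_boost)
```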
The agent part is the loop of running the LLM over the RAG system and letting it decide which questions it wants to explore further (some similarities to retry|refuse|respond, I guess?). We also have the model do CoT over its own results, including over the subquestions it generates.
Essentially it is the deep research paradigm with some more parallelism and a document index backing it.
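A rough sketch of that fan-out, reusing the hypothetical llm and search callables from the earlier sketch; the prompts and the thread-pool parallelism are illustrative, not the actual agent implementation.

```python
# Sketch: decompose the question, research each subquestion over the index in
# parallel, then have the model reason over the combined findings.

from concurrent.futures import ThreadPoolExecutor

def answer_subquestion(llm, search, subquestion: str) -> str:
    chunks = search(subquestion)
    return llm("Answer briefly from the evidence.\nQ: " + subquestion
               + "\nEvidence:\n" + "\n".join(chunks))

def deep_research(llm, search, question: str) -> str:
    subquestions = [
        s.strip()
        for s in llm(
            "Break this question into independent subquestions, one per line:\n"
            + question
        ).splitlines()
        if s.strip()
    ]
    with ThreadPoolExecutor() as pool:
        partials = list(pool.map(
            lambda sq: answer_subquestion(llm, search, sq), subquestions
        ))
    # CoT over its own results: the model reviews the partial findings before
    # producing the final response.
    return llm(
        "Question: " + question
        + "\nSubquestion findings:\n" + "\n".join(partials)
        + "\nReason over these findings step by step, then answer."
    )
```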
How does the agent traverse the information: there are index-free approaches where the LLM has to use the tools' built-in searches at query time. These give worse results than approaches that build a coherent index across sources, so we use the latter. The search happens over our index, which is a central place for all of the knowledge across all connected tools.
Do you have any internal evals on how the different models affect the overall quality of output, especially for a "deep search" type of task? I have model-picker fatigue: Yes, we have datasets that we use internally. They consist of "company-type data" rather than "web-type data" (short Slack messages, very technical design documents, etc.), about 10K documents and 500 questions.
For which model to use: it was developed primarily against gpt-4o but we retuned the prompts to work with all the recent models like Claude 3.5, Gemini, Deepseek, etc.
Do you plan to implement knowledge graphs in the future? Yes! We're looking into customizing LLM-based knowledge graphs like LightGraphRAG (inspired by it, but not the same).
Do you think this indexing architecture would bring benefits to general web research? If implemented like: planner, searches, index webpages in chunks, search in index, response
Would you ever extend your app to search the web or specialized databases for law, finance, science etc?
On privacy and security, we are the only option (as far as I know) where you can connect all of your company-internal docs and have everything processed locally within the deployment and stored at rest within the deployment.
So basically you can have it completely airgapped from the outside world; the only tough part is the local LLM, but there are lots of options for that these days.
Before sharing how it works, I want to highlight some of the challenges of a system like this. Unlike deep research over the internet, LLMs aren't able to easily leverage the built-in searches of these SaaS applications. They each have different ways of searching for things, many do not have strong search capabilities, or they rely on their own internal query language. There are also a ton of other signals that web search engines use that aren't available natively in these tools, such as backlinks and clickthrough rates. Additionally, a lot of teams rely on internal terminology that is unique to them and hard for the LLM to search for. There's also the challenge of unifying the objects across all of the apps into a plaintext representation that works for the LLM.
The best way we’ve found to do this is to build a document index instead of relying on application native searches at query time. The document index is a hybrid index of keyword frequencies and vectors. The keyword component addresses issues like team specific terminology and the vector component allows for natural language queries and non-exact matching. Since all of the documents across the sources are processed prior to query time, inference is fast and all of the documents have already been mapped to an LLM friendly representation.
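A much-simplified sketch of that hybrid scoring, assuming pre-processed chunks with stored term frequencies and embeddings; in the real deployment the fusion happens inside the index (Vespa), and the helpers and the 0.5 fusion weight here are illustrative only.

```python
# Simplified sketch of hybrid retrieval: a keyword score and a vector score
# (cosine similarity of embeddings) are fused per chunk.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def hybrid_search(query_terms, query_vec, chunks, alpha=0.5, k=10):
    """chunks: dicts with 'id', 'term_freqs' (term -> count), 'embedding'."""
    keyword_scores = [
        sum(c["term_freqs"].get(t, 0) for t in query_terms) for c in chunks
    ]
    max_kw = max(keyword_scores, default=0) or 1
    scored = [
        (alpha * kw / max_kw
         + (1 - alpha) * cosine(query_vec, c["embedding"]), c["id"])
        for kw, c in zip(keyword_scores, chunks)
    ]
    return [cid for _, cid in sorted(scored, reverse=True)[:k]]
```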
There are also other signals that we can take into account which are applied across all of the sources. For example, the time that a document was last updated is used to prioritize more recent documents. We also have models that run at indexing time to label documents and models that run at inference time to dynamically change the weights of the search function parameters.
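A toy sketch of an indexing-time labeling pass, for example flagging low-information documents so they can be down-weighted later; the heuristic features and thresholds are assumptions standing in for the actual trained models.

```python
# Sketch of indexing-time labeling. In practice a small trained model would
# produce these labels; the heuristics here are placeholders.

def label_document(text: str, last_updated_ts: float) -> dict:
    tokens = text.split()
    unique_ratio = len(set(tokens)) / len(tokens) if tokens else 0.0
    return {
        "is_low_information": len(tokens) < 30 or unique_ratio < 0.2,
        "last_updated_ts": last_updated_ts,  # consumed later by time decay
        "doc_length": len(tokens),
    }
```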
When you talked about the "document index is a hybrid index of keyword frequencies and vectors", I am a bit curious how you get them. In pre-processing, do you have to use an LLM / models to go through documents to get keywords? What about vectors? Are you using an embedding model to generate them? Does that imply preprocessing has to be done whenever there is a new doc or any modification to an existing doc? Would that be costly in time? Any spicy tricks to make the preprocessing more efficient?