The most limiting factor I’ve come across is hitting the context window. Eventually your eager new employee starts to forget what you’ve taught them, but they’re too confident to admit it.
Are there methods to "summarize what they've learned" and then replace the context window with the shorter version? This seems like pretty much what we do as humans anyway... we need to encode our experiences into stories to make any sense of them. A story is a compression and symbolization of the raw data one experiences.
Yeah that's a fairly well studied one. Most of these techniques are rather "lossy" compared to extending the context window. The most likely "real solution" is going to be using various tricks and finetuning on higher context lengths to just extend the context window.
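If you do want to go the lossy route, the simplest version is a rolling summary. A minimal sketch, assuming a hypothetical `chat(messages)` wrapper around whatever completion API you use and a `count_tokens` helper for your tokenizer:

```python
def compress_history(history, chat, count_tokens, budget=6000, keep_recent=10):
    """Fold older messages into a short summary once the token budget is exceeded.
    `chat` and `count_tokens` are stand-ins for your API wrapper and tokenizer."""
    def total(msgs):
        return sum(count_tokens(m["content"]) for m in msgs)

    if total(history) <= budget or len(history) <= keep_recent:
        return history

    old, recent = history[:-keep_recent], history[-keep_recent:]
    transcript = "\n".join(f'{m["role"]}: {m["content"]}' for m in old)
    # Ask the model to compress the older turns into a short "story".
    summary = chat([
        {"role": "system", "content": "Summarize the key facts, decisions, and "
                                      "open questions below as tersely as possible."},
        {"role": "user", "content": transcript},
    ])
    # The summary replaces the raw messages; this is exactly where nuance gets lost.
    return [{"role": "system", "content": f"Earlier conversation (summarized): {summary}"}] + recent
```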
Yes! The obvious answer is to just increase the number of positions and train for that. This requires a ton of memory, however (attention scales with the square of the context length), so most are currently training at 4k/8k and then finetuning to longer lengths, similar to many of the image models.
However, there's been some work to get extra mileage out of the current models, so to speak, with rotary positions and a few other tricks. These, in combination with finetuning, are the current method many are using at the moment IIRC.
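One of those tricks, as I understand it, is scaling the rotary positions so a longer sequence lands inside the position range the model was pretrained on, then finetuning briefly at the new length. A minimal sketch (not any particular library's implementation; the scale factor and rotation layout are illustrative):

```python
import torch

def rope_angles(head_dim, positions, base=10000.0, scale=1.0):
    # Standard rotary frequencies; scale < 1 is linear position interpolation,
    # e.g. scale = 4096 / 16384 squeezes a 16k sequence into the 0..4096
    # position range the model saw during pretraining.
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    angles = torch.outer(positions.float() * scale, inv_freq)
    return angles.cos(), angles.sin()

def apply_rope(x, cos, sin):
    # x: (..., seq, head_dim); rotate interleaved channel pairs by the position angle.
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out
```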
The bottleneck is quickly going to be inference. Since the current transformer models need memory proportional to the context length squared, the requirements go up very quickly. IIRC a 4090 can _barely_ fit a 4-bit 30B model in memory with a 4096-token context length.
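Back-of-envelope with LLaMA-30B-ish numbers (60 layers, 52 heads, 6656 hidden dim; treat these as illustrative assumptions, not measurements):

```python
# Rough memory budget for a ~30B model at 4k context on a 24 GB card.
params, layers, heads, hidden, ctx = 33e9, 60, 52, 6656, 4096

weights_gb  = params * 0.5 / 1e9                    # 4-bit weights, ~0.5 byte/param    -> ~16.5 GB
kv_cache_gb = 2 * layers * ctx * hidden * 2 / 1e9   # fp16 K and V for every layer      -> ~6.5 GB
scores_gb   = heads * ctx * ctx * 2 / 1e9           # one layer's fp16 attention scores -> ~1.7 GB

print(weights_gb, kv_cache_gb, scores_gb)
# The weights alone nearly fill the card, and the score term is the quadratic one:
# doubling the context length quadruples it, while the KV cache only doubles.
```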
From my understanding, some form of RNN is likely to be the next step for longer context. See RWKV as an example of a decent RNN: https://arxiv.org/abs/2305.13048
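The appeal, loosely, is that a recurrent model folds the entire history into a fixed-size state, so per-token memory doesn't grow with context length. A toy cell just to show the shape of the idea (this is a generic RNN step, not RWKV's actual time-mix/channel-mix formulation; see the paper for that):

```python
import torch

class ToyRecurrentCell(torch.nn.Module):
    """All past context lives in `state`, whose size stays fixed no matter how
    long the sequence gets (contrast with attention's growing KV cache)."""
    def __init__(self, dim):
        super().__init__()
        self.in_proj = torch.nn.Linear(dim, dim)
        self.state_proj = torch.nn.Linear(dim, dim)

    def forward(self, x_t, state):
        new_state = torch.tanh(self.in_proj(x_t) + self.state_proj(state))
        return new_state, new_state  # (output, carried state)
```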
I’ve absolutely explored this idea but, similar to lossy compression, sometimes important nuance is lost in the process. There is both an art and a science to recalling the gently compacted information and being able to recognize when it needs to be repeated back.
If there were something like objects in OO programming, but for LLMs, would that solve this?
Like a Topic-based Personality Construct where the model first determines which of its “selves” should answer the question, and then grabs appropriate context given the situation.
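You can approximate that today with a cheap routing call in front of the real one. A hypothetical sketch, where `chat`, the personas, and their context notes are all invented for illustration:

```python
# A routing call picks which "self" should answer, then only that persona's
# context goes into the prompt. `chat(messages)` is a stand-in for any
# completion API; the personas and notes below are made up.
PERSONAS = {
    "dba": {
        "system": "You are the database specialist.",
        "context": "Schema conventions, migration history, naming rules, ...",
    },
    "frontend": {
        "system": "You are the frontend specialist.",
        "context": "Component library notes, style guide, browser quirks, ...",
    },
}

def answer(question, chat):
    topic = chat([{"role": "user", "content":
        f"Which specialist should answer this, 'dba' or 'frontend'? One word.\n\n{question}"}]).strip().lower()
    persona = PERSONAS.get(topic, PERSONAS["dba"])
    return chat([
        {"role": "system", "content": persona["system"] + "\n\n" + persona["context"]},
        {"role": "user", "content": question},
    ])
```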
The animal-brain equivalent isn't summarizing a context window to account for limited working memory. It's never leaving training mode to go into inference-only mode. The learned models in animal brains never stop learning.
There is nothing stopping someone from keeping an LLM in online-training mode forever. We don't do that because it's economically infeasible, not because it wouldn't work.
Putting too much information in the context window is counter-productive in my experience. A low signal-to-noise ratio tends to increase the likelihood of model hallucinations, and we don't want that!
What works in my experience is structuring the task like a human-driven workflow and breaking it down into small steps. Each step can be driven by a small prompt, relevant document fragments (if RAG is used), and condensed essays/tutorials/guides written by a powerful LLM (ideally, GPT-4 pre-Turbo).
Using this approach, you can stay well below the 8k token limit even on the most demanding tasks.
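In code, the shape is roughly this; `chat`, `retrieve`, the guide text, and the step list are placeholders rather than any specific framework:

```python
# Each step gets its own small prompt plus a few retrieved fragments, so no
# single call comes anywhere near the 8k limit.
STEPS = [
    "Extract the requirements from the ticket below.",
    "Draft the schema changes implied by these requirements.",
    "Write the migration script for the drafted schema changes.",
]

def run_workflow(task, chat, retrieve, guide=""):
    result = task
    for instruction in STEPS:
        fragments = "\n\n".join(retrieve(instruction + " " + result, k=3))
        prompt = (
            f"{guide}\n\n"                       # condensed guide written by a stronger model
            f"Relevant fragments:\n{fragments}\n\n"
            f"{instruction}\n\n{result}"
        )
        result = chat(prompt)                    # the output of one step feeds the next
    return result
```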
What about some generation-augmented retrieval-augmented-generation setup, where all your conversations are indexed for regular text search and you then use the LLM's language knowledge to generate relevant search phrases, the results of which are included in the current prompt?
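Roughly, that loop could look like this. A sketch assuming a hypothetical `chat` helper and a `search_index` function over a plain full-text index of past conversations that returns text snippets:

```python
def answer_with_memory(question, chat, search_index, n_queries=3, n_hits=5):
    # 1. Let the model write the search phrases.
    queries = chat(
        f"Give {n_queries} short search phrases, one per line, that would find "
        f"past conversations relevant to:\n{question}"
    ).splitlines()

    # 2. Ordinary full-text search over the indexed conversations.
    hits = []
    for q in queries:
        hits.extend(search_index(q, limit=n_hits))

    # 3. Fold the hits into the current prompt.
    context = "\n---\n".join(dict.fromkeys(hits))  # dedupe while keeping order
    return chat(f"Relevant past conversations:\n{context}\n\nQuestion: {question}")
```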
Yeah, especially with a large knowledge base I find it important to keep a log of prompts/responses and perform team reviews of both. It’s honestly making more work than it’s saving at the moment with the hope that it’ll be more helpful down the road. On the plus side it’s made the team more interested in tasks around technical documentation and marketing material, so still a win!
Heh, funny you say that. Schemaverse was built for my master's thesis, which explored whether the application layer and data layer could be successfully merged in a way that improved data consistency and integrity, and thus arguably improved security. I'll save you the 90-page read: short answer, yes, but scaling becomes an absolute nightmare.
The very tiny server is getting a bit crushed at the moment. If you actually want to try it out, check the tutorial tomorrow when the traffic goes back to normal.
I encountered the same error message, but in my case it seemed to come from a broken link instead. From the homepage [0] while signed in, I clicked on "How to play" right under the query entry box. It sent me to this page, which is a different URL from the one you linked: https://wiki.github.com/Abstrct/Schemaverse/how-to-play
Heh, ok did not expect to see my old project sitting on the front page <3 The server is getting gently hugged to death atm but I'll try to keep it responding.
The same guy pretty much wins every year with minor modifications to his code (okay, except the last two years with the whole covid thing). That said, he's an extremely cool guy and very eager to train his potential competition. Please please please take him up on the offer.
I suggest anyone show up to the Defcon Is Cancelled party that he co-organizes.
Heads up, you're still pointing to Freenode for the IRC channel. Possibly you'd want to change that to point at Libera.Chat. The channel already exists at the latter.
That's awesome! I'm glad to hear it helped. I was personally so sick of every database course I took using the same `department` and `employee` tables, I wanted something fun instead.
This is a pretty smart purchase: with it, MasterCard is getting a fairly mature technology stack (by cryptocurrency standards, at least), an incredible amount of data (mostly attribution data relating to addresses and transactions), and a team that’s experienced with the industry (where there is a massive void of skilled workers).