I believe this is utter fantasy. That kind of data is usually super messy, and LLMs are terrible at distinguishing useful information from harmful information.
It's also unlikely that context windows will become unbounded to the point where all that data can fit in context, and even if it could, it's another question entirely whether the model can actually utilize all that information.
Many, many unknown unknowns would need to be overcome for this to even be in the realm of possibility. Right now it's difficult enough to get simple agents with relatively small context to be reliable and perform well, let alone something like what you're suggesting.
That's not the goal of LLMs. CEOs and high-level executives need people beneath them to handle ambiguous or non-explicit commands and take ownership of their actions from conception to release. Sure, LLMs can be configured to handle vague instructions and even say, "sure, boss, I take responsibility for my actions," but no real boss would be comfortable with that.
Think about it: if, in 10 years, I create a company and my only employee is a highly capable LLM that can execute any command I give, who's going to be liable if something goes wrong? The LLM or me? It's gonna be me, so I better give the damn LLM explicit and non-ambiguous commands... but hey, I'm only the CEO of my own company; I don't know how to do that (otherwise, I would be an engineer).
I’d definitely be interested in at least taking a shot at working for a company CEO’d by an LLM… maybe 3 years from now.
I don’t know if I really believe that it would be better than a human in every domain. But it definitely won’t have a cousin on the board of a competitor, reveal our plans to golfing buddies, make promotions based on handshake strength, or get canceled for hitting on employees.
But it will change its business plan the first time someone says "No, that doesn't make sense", and then it'll forget what either plan was after a half hour.
To be CEO is to have opinions and convictions, even if they are incorrect. That's beyond LLMs.
Minor tangential quibble: I think it is more accurate to say that to be human is to have opinions and convictions. But, maybe being CEO is a job that really requires turning certain types of opinions and convictions into actions.
More to the point, I was under the impression that current super-subservient LLMs were just a result of the fine-tuning process. Of course, the LLM doesn’t have an internal mental state so we can’t say it has an opinion. But, it could be fine-tuned to act like it does, right?
That was my point - to be CEO is to have convictions that you're willing to bet a whole company upon.
Who is fine-tuning the LLM? If you have someone turning the dials and setting core concepts and policies so that they persist outside the context window, it seems to me that they're the actual leader.
Generally the companies that sell these LLMs as a service do things like fine-tuning and designing built-in parts of the prompt. If we want to say the employees of those companies are the ones actually doing <the thing>, I could be convinced, I think. But it's an unusual interpretation; usually we consider the one doing <the thing> to be the person using the LLM.
I’m speculating about a company run by an LLM (which doesn’t exist yet), so it seems plausible enough that all of the employees of the company could use it together (why not?).
Yeah, or maybe even a structure that is like a collection of co-ops, guilds, and/or franchises somehow coordinated by an LLM. The mechanism for actually running the thing semi-democratically would definitely need to be worked out!