Personally, my theory is that Gemini benefits from being able to train on Google's massive internal codebase, and because Rust uptake has been very low internally at Google (especially since they have some really nice C++ tooling), Gemini is comparatively bad at Rust.
Tangential, but I worry that LLMs will cause a great stagnation in programming language evolution, and possibly in a bunch of other tech.
I've tried using a few new languages, and the LLMs would all swap the code out for syntactically similar languages, even after I told them to read the doc pages.
Whether that's for better or worse I don't know, but it does feel like new languages are genuinely solving hard problems as their raison d'être.
Not just that, I think this will happen on multiple levels too. Think de facto ossified libraries, tools, etc.
LLMs thrive because they had a wealth of high-quality training data in the form of Stack Overflow, GitHub, etc., and ironically their uptake is strangling that very source of training data.
Perhaps the next big programming language will be designed specifically for LLM friendliness. Some things that are human-friendly, like long keywords, are just a waste of tokens for LLMs, and there could be other optimisations too.
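As a toy illustration of the token cost (the two snippets and the four-characters-per-token rule of thumb are both made up for the example; real BPE tokenizers behave differently, but long keywords do tend to split into more tokens):

    package main

    import "fmt"

    // Two made-up but equivalent snippets: one with long, human-friendly
    // keywords, one with terse ones.
    const verbose = `function_definition add(first_operand, second_operand)
        return_value first_operand + second_operand
    end_function_definition`

    const terse = `fn add(a, b)
        ret a + b
    end`

    // approxTokens uses the common rule of thumb of roughly one BPE token
    // per four characters of English-like text. It's a crude stand-in for
    // a real tokenizer, just to make the cost difference visible.
    func approxTokens(s string) int {
        return (len(s) + 3) / 4
    }

    func main() {
        fmt.Printf("verbose: ~%d tokens\n", approxTokens(verbose))
        fmt.Printf("terse:   ~%d tokens\n", approxTokens(terse))
    }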
>Personally, my theory is that Gemini benefits from being able to train on Google's massive internal codebase, and because Rust uptake has been very low internally at Google (especially since they have some really nice C++ tooling), Gemini is comparatively bad at Rust.
Were they to train it on their C++ codebase, it wouldn't be effective, because Google doesn't use Boost or CMake or most of the other major libraries and tools that C++ developers in the wider world rely on. It would also suggest the user reach for all kinds of internal-only C++ libraries that aren't available outside Google. So no, they aren't training on their own C++ corpus, nor would it be particularly useful.
> Personally, my theory is that Gemini benefits from being able to train on Google's massive internal codebase
But does Google actually train its models on its internal codebase? Considering that there's always the risk of the models leaking proprietary information and security architecture details, I find it hard to believe they would run that risk.
That's interesting. I've tried Gemini 2.5 Pro on C# + Unity code from time to time because of the rave reviews I've seen, and I've always been disappointed (compared to ChatGPT o3 and o4-mini-high, and even Grok). This would support that theory.
As Go feels like a straitjacket compared to many other popular languages, it's probably very well suited to an LLM in general.
Thinking about it - wasn't this the idea of Go from the start? Nothing fancy, to keep non-rocket scientists away from footguns, and to have everyone produce code that everyone else can understand.
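To make that concrete, here's a sketch of the shape most Go code takes (readConfig and config.json are made-up names, but the explicit error-handling pattern is the one the language pushes everyone toward):

    package main

    import (
        "fmt"
        "os"
    )

    // readConfig is a made-up helper, but its shape is the shape of most
    // Go functions: do one thing, return (value, error), no exceptions.
    func readConfig(path string) ([]byte, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return nil, fmt.Errorf("reading config: %w", err)
        }
        return data, nil
    }

    func main() {
        data, err := readConfig("config.json")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("loaded %d bytes\n", len(data))
    }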
Diving into a Go project, you almost always know what to expect, which is a great thing for a business.