This is craaaaaazzzzzy. I'm just a layman, but to me, this is the most compelling evidence I've ever seen that things are starting to tilt toward AGI.
You’re anthropomorphizing it. Years ago, people tried to argue that when GPT-3 repeated words in a loop it was being poetic. No, it was just a statistical failure mode.
When these new models go off to a random site and get caught in a loop of exploring pages, that doesn't mean it's an AGI admiring nature.
This is clearly not random. If I ask it to implement a particular function in Rust using a library I've previously built, and it does that, that's not random.
Why are you surprised by LLMs doing irrational or weird things?
All machine learning models start off in a random state. As training progresses, their input/output behavior increasingly mimics the data they were trained on.
LLMs have been doing a great job mimicking our human flaws from the beginning, because we train them on a ton of human-generated data. Other weird behavior can easily be attributed to the simple fact that they're initialized in a random state.
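A toy sketch of that point, in plain NumPy (a made-up two-character "dataset"; nothing here resembles a real LLM, and every name is invented for the example): start from random weights, take a few hundred gradient steps, and the model goes from predicting noise to reproducing the only pattern in its data.

    # Minimal sketch, not a real LLM: a one-layer next-character model.
    # The "dataset" and all names are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    text = "abababab"                      # toy "human-generated data"
    vocab = sorted(set(text))
    ix = {c: i for i, c in enumerate(vocab)}
    X = np.eye(len(vocab))[[ix[c] for c in text[:-1]]]  # one-hot inputs
    y = np.array([ix[c] for c in text[1:]])             # next-char targets

    W = rng.normal(size=(len(vocab), len(vocab)))       # random initial state

    def probs(x, w):
        logits = x @ w
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    for _ in range(500):                   # plain cross-entropy gradient descent
        p = probs(X, W)
        p[np.arange(len(y)), y] -= 1       # softmax gradient: p - one_hot(y)
        W -= 0.5 * (X.T @ p) / len(y)

    # Before training, predictions were noise from the random init;
    # after, the model mimics its data: 'a' -> 'b', 'b' -> 'a'.
    for c in vocab:
        print(c, "->", vocab[int(np.argmax(probs(np.eye(len(vocab))[[ix[c]]], W)))])

Same mechanics at vastly larger scale: "it mimics its training data, plus whatever noise training didn't iron out" covers most of these "weird" behaviors.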
Being able to work on and prove non-trivial theorems is a better indication of AGI, IMO.