This type of complete garbage is not uncommon in AI. It's simply the nature of asking a non-intelligent system to generate human-readable content.
Maybe this is a different way to think about it: in most of the country, your cell phone has _amazing_ coverage. It can talk clearly to a cell tower, and your data and calls work perfectly.
In other parts of the country, you'll have no service at all. Your cell phone won't work; there are no cell towers for it to talk to.
At the boundary between service and no service, you'll find areas where your cell phone works sporadically. It might barely have one bar. You might need to hold the phone a certain way. It works seemingly at random, and only a few words of a call might make it through.
That edge of service is essentially where the LLM is. It's in an internal state where it has enough signal to attempt a response, but not a strong enough signal to generate a meaningful one, so it ends up falling back on something it has "memorized".
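To make the "signal strength" idea a bit more concrete, here's a toy sketch (purely illustrative, not how any real model reports confidence): treat the entropy of the next-token distribution as the bars of service. A sharply peaked distribution keeps producing the same sensible token; a nearly flat one wanders, like a call where only a few words get through.

```python
# Toy illustration of the "signal strength" analogy -- NOT how any real LLM
# measures confidence. Low entropy over next tokens ~ full bars; high entropy
# ~ edge of coverage, where whatever comes out is close to arbitrary.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat", "qux", "zorp", "blarg"]

def entropy_bits(p):
    """Shannon entropy in bits; higher means less 'signal'."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    return float(-(p * np.log2(p + 1e-12)).sum())

# "Full bars": probability mass concentrated on one token.
peaked = np.array([0.90, 0.02, 0.02, 0.02, 0.01, 0.01, 0.01, 0.01])
# "One bar": probability mass spread almost evenly across the vocabulary.
flat = np.full(len(vocab), 1.0 / len(vocab))

for name, dist in [("peaked", peaked), ("flat", flat)]:
    samples = rng.choice(vocab, size=5, p=dist / dist.sum()).tolist()
    print(f"{name:>6}: {entropy_bits(dist):.2f} bits, samples: {samples}")
```

With the peaked distribution the samples will usually be "the" over and over; with the flat one they're a grab bag from the whole vocabulary, which is roughly the "a few random words go through" behavior.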
I find your example surreal as well... these kinds of technological liminal spaces, where hard-and-fast rules break down into seemingly black magic, always give me that feeling.
"You might be able to get cell service by holding your phone differently. Try waving it randomly around the room, one corner might work better than others."
"The USB stick enters on the third try."
"An iterative semi-deterministic bag of matrix multiplications can convincingly communicate. Undefined behavior appears schizophrenic."
On an intellectual level, I get it, but it's still fuckin' weird.