I just had it convert Swift code to Kotlin and was surprised at how the comment was translated.
It "knew" the author of the paper and what is was doing!? That is wild.
Swift:
//
// Double Reflection Algorithm from Table I (page 7)
// in Section 4 of https://tinyurl.com/yft2674p
//
for i in 1 ..< N {
    let X1 = spine[i]
    ...
Kotlin:
// Use the Double Reflection Algorithm (from Wang et al.) to compute subsequent frames.
for (i in 1 until N) {
    val X1 = Vector3f(spine[i])
    ...
Wild that it can do that, but it's also clearly worse output. The original comment lists a URL, section, page, and table; the AI version only cites the author. Having to notice and fix unhelpful tweaks like this is one of the burdens of working with LLMs.
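For reference, Table I in that paper (Wang et al., "Computation of Rotation Minimizing Frames") is the frame-propagation loop the snippet is implementing. Here is a minimal Kotlin sketch of that step, using a small hypothetical Vec3 helper in place of the Vector3f above and assuming the spine points come with precomputed unit tangents; it's an illustration of the algorithm, not the thread's actual code:

// Sketch of the Double Reflection Algorithm (Table I in Wang et al.).
// Vec3 is a tiny stand-in value type, not the Vector3f from the snippet above.
data class Vec3(val x: Float, val y: Float, val z: Float) {
    operator fun minus(o: Vec3) = Vec3(x - o.x, y - o.y, z - o.z)
    operator fun times(s: Float) = Vec3(x * s, y * s, z * s)
    infix fun dot(o: Vec3): Float = x * o.x + y * o.y + z * o.z
    infix fun cross(o: Vec3) = Vec3(y * o.z - z * o.y, z * o.x - x * o.z, x * o.y - y * o.x)
}

// points: spine samples; tangents: unit tangents at each sample;
// r0: initial reference (normal) vector, perpendicular to tangents[0].
// Returns a rotation-minimizing reference vector r_i for every sample.
fun doubleReflectionFrames(points: List<Vec3>, tangents: List<Vec3>, r0: Vec3): List<Vec3> {
    val r = ArrayList<Vec3>(points.size)
    r.add(r0)
    for (i in 0 until points.size - 1) {
        // Reflection 1: reflect r_i and t_i in the plane bisecting points i and i + 1.
        val v1 = points[i + 1] - points[i]
        val c1 = v1 dot v1
        val rL = r[i] - v1 * (2f / c1 * (v1 dot r[i]))
        val tL = tangents[i] - v1 * (2f / c1 * (v1 dot tangents[i]))
        // Reflection 2: reflect again so the reflected tangent lands on tangents[i + 1].
        val v2 = tangents[i + 1] - tL
        val c2 = v2 dot v2
        r.add(rL - v2 * (2f / c2 * (v2 dot rL)))
    }
    return r
}

The binormal for each frame is then tangents[i] cross r[i] (the last line of the table), which is presumably what the rest of the loop in the original code goes on to compute.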
Well, of course it knew the author. I'm sure you can ask just about any LLM who the author of the DRA is and it will answer Wang et al. without even having to google or follow the tinyurl link. And certainly it would also know that the algorithm is supposed to compute rotation minimizing frames.
Not sarcastic at all. It just doesn't seem like a big deal if you've played with LLMs and realize just how much they know. The double reflection paper is not particularly obscure. (Incidentally, I asked Claude about implementing rotation-minimizing frames just a couple of weeks ago!)
Someone else has written this exact code on the internet, OpenAI stole it, and now ChatGPT is regurgitating it, just like it can regurgitate whole articles.
You need to stop being wow'd by human intelligence masquerading as AI!