Not to troubleshoot, but unless you visually inspected the context that was provided to the model, it's quite possible it never even had your change pulled in.
Lots of front ends do tricks like partially loading the file, using a cached version, or some other behavior. Plus, if you presented the file in the same “thread,” it's possible the model got confused about which version to look at.
These front ends do a pretty lousy job of communicating to you, the end user, precisely what they are pulling into the model's context window at any given time. And what the model sees as its full context might change during the conversation as the front end edits portions of the same session (like dropping large files it pulled in earlier that it somehow determines aren't relevant).
In short, what you see might not be what the model is seeing at all, which is why it isn't returning the results you expect. Every front end plays games with the context it provides to the model in order to reduce token counts and improve model performance (however “performance” gets defined and measured by the designers).
That all being said, it's also completely possible it just missed the gorilla in the middle… so who really knows, eh?