Isn't an apology a bad metric for evaluating models?
Without understanding much, it seems to be more an indication of the type of content the model was trained on than an indicator of how good or bad a model is, or how much it knows. It would probably be easy to create a bad model that constantly outputs wrong information but always apologizes when corrected.
A model changing its opinion on the first request may sound more flattering to you, but it is much less trustworthy for anybody sane. With a more stubborn model, at least I have to worry less about giving away what I think about a subject through subtle phrasing. Other than that, it's hard to say anything about your scenario without more information. Maybe it gave you the right information and you failed to understand it, or maybe it was wrong, and then it's no big news, because LLMs are not some magic thing that always gives you the right answer, you know.
Usually models just apologize after I insist 2 or 3 times.
So this one caved faster than any other LLM I tried; I honestly can't trust such models for anything.