I don't think this reflects any gap between Microsoft's and OpenAI's capabilities; I'd speculate the differences come down to the following:
1. Despite its abilities, ChatGPT was heavily policed and restricted: it was a closed model behind a simple interface, with no internet access or real-time search.
2. GPT in Bing is arguably a much better product in terms of features, and more features mean more potential issues.
3. Despite having a lot more features, I speculate the Bing team didn't get enough time to polish them, partly because of their push to be first out the door (which imo is a totally valid concern: Bing will never get another shot at a meaningful share of search if it releases a similar product after Google).
4. I speculate that the model Bing is using differs from the one that powered ChatGPT. The difference could be a model trained on different data, a smaller model that is easier to scale up, heavy caching, etc.
TL;DR: I highly doubt it is a cultural issue. You notice the difference because Bing is offering a much more feature-rich product, didn't get enough time to refine it, and is trying to reach a much bigger scale than ChatGPT while sustaining that growth without burning through its compute budget.
Bing AI is taking in much more context data, IIUC. ChatGPT was prepared with fine-tuning and an engineered prompt, and then only had to interact with the user. Bing AI, I believe, also takes the text of several web pages (or at least summarised extracts of them) as additional context; these probably amount to more input than a user would typically provide, and they are essentially uncontrolled. It may simply be that Microsoft's influence over the bot's behaviour is reduced because the engineered prompt accounts for a smaller fraction of the bot's context.
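To make that "prompt dilution" argument concrete, here is a minimal sketch. All token counts are made-up illustrative figures (neither product's real prompt sizes are public); the point is only how injecting retrieved pages shrinks the share of the context the operator controls:

```python
# Hypothetical illustration of "prompt dilution": the share of a chat
# model's context window that comes from the operator-controlled system
# prompt shrinks once retrieved web-page text is injected. All token
# counts below are invented example figures, not real ChatGPT/Bing ones.

def prompt_share(system_tokens: int, other_tokens: int) -> float:
    """Fraction of the total context made up of the engineered prompt."""
    total = system_tokens + other_tokens
    return system_tokens / total

# ChatGPT-style setup: engineered prompt plus a short user message.
chatgpt = prompt_share(system_tokens=1500, other_tokens=200)

# Bing-style setup: same prompt, same message, plus extracts of
# three retrieved web pages at ~2000 tokens each.
bing = prompt_share(system_tokens=1500, other_tokens=200 + 3 * 2000)

print(f"ChatGPT-style prompt share: {chatgpt:.0%}")  # ~88%
print(f"Bing-style prompt share:    {bing:.0%}")     # ~19%
```

Under these (invented) numbers, the engineered prompt drops from the vast majority of the context to under a fifth of it, which is one plausible way retrieved, uncontrolled text could dilute the operator's steering.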
Also, by the time ChatGPT really broke through into public consciousness, a lot of people had already been interacting with its web API, providing good RLHF training signal.