How did it even make sense in your head to compare an OLTP workload to a chat service?
The proxy servers are there to terminate the large number of persistent connections. Of course it's possible to do it with fewer servers, but given that the HipChat guys are smart (disclosure: I'm an Atlassian and know the internals), I'd give them the benefit of the doubt rather than engage in armchair architecture.
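To make the "terminate persistent connections" point concrete, here is roughly the shape of what such a proxy does. This is a minimal Go sketch, not HipChat's actual setup: the listen port (XMPP's standard 5222) and the backend address are illustrative assumptions.

    // Minimal sketch of a connection-terminating proxy: each chat client
    // holds a long-lived TCP connection, and the proxy's job is mostly to
    // hold those sockets open and shuttle bytes to a backend.
    package main

    import (
        "io"
        "log"
        "net"
    )

    const (
        listenAddr  = ":5222"          // XMPP's standard client port (illustrative)
        backendAddr = "10.0.0.10:5222" // hypothetical internal chat backend
    )

    func main() {
        ln, err := net.Listen("tcp", listenAddr)
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("terminating persistent connections on %s", listenAddr)
        for {
            client, err := ln.Accept()
            if err != nil {
                log.Print(err)
                continue
            }
            // One goroutine per connection; the cost is mostly memory and
            // file descriptors, which is why connection count (not message
            // rate) drives how many proxy boxes you need.
            go proxy(client)
        }
    }

    func proxy(client net.Conn) {
        defer client.Close()
        backend, err := net.Dial("tcp", backendAddr)
        if err != nil {
            log.Print(err)
            return
        }
        defer backend.Close()
        go io.Copy(backend, client) // client -> backend
        io.Copy(client, backend)    // backend -> client
    }

The point of the sketch is that the work per connection is tiny, but the connections never go away, so you scale these boxes by open-socket count rather than by messages per second.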
My guess: the 60 messages per second figure is an average. Monday morning in US timezones might see several multiples of that, and weekends could be a lot lower.
It's definitely very bursty. On weekends and holidays things are much quieter; during peak load we'll be in the hundreds/sec. Also keep in mind that chat messages don't actually make up the majority of the traffic we serve; it's presence information (away, idle, available), people connecting and disconnecting, etc. (I'm one of the HipChat co-founders.)
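For anyone wondering how a 60/sec average squares with "hundreds/sec" at peak, here's a back-of-envelope sketch. The business-hours assumption is mine, not a HipChat number, and presence traffic comes on top of this:

    // Back-of-envelope: why a 60 msg/sec average says little about peak
    // provisioning. The concentration factor below is an illustrative
    // assumption, not measured HipChat traffic.
    package main

    import "fmt"

    func main() {
        const avgMsgPerSec = 60.0 // the headline average from the article

        // Assume most traffic lands in ~8 business hours of US weekdays,
        // i.e. roughly 40 of the 168 hours in a week.
        peakFactor := 168.0 / 40.0
        fmt.Printf("peak chat messages: ~%.0f/sec\n", avgMsgPerSec*peakFactor)
        // Presence updates and connect/disconnect events are not counted
        // here, and per the co-founder they outnumber chat messages.
    }

With that (assumed) concentration, ~60/sec on average works out to a peak in the low hundreds per second, consistent with what's described above.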
And? The article makes no mention of what each server is responsible for, how much redundancy is built in (this is AWS where nodes die on a whim), whether all of these servers are used for production or if there is a spread across dev and staging environments, etc.
When all of these factors are considered, 52 servers isn't a huge number.