I'm not necessarily questioning the accuracy, just that they generally don't consider the affinities, and a lot of software assumes the number of CPUs matches the concurrency actually available. If only half the CPUs are in the affinity set, the processes/threads could end up contending roughly twice as much as intended. I suppose that, depending on how the number is used, it could still improve throughput (e.g. when blocking on I/O).
Depends on the language used in the spec, but "CPUs available for scheduling" seems like the definition most software should use. However, I suspect most software is built using an interface that returns the total CPU count for the machine.
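For what it's worth, here's a minimal Linux-only sketch of that distinction in Go, assuming golang.org/x/sys/unix is available. Error handling is elided for brevity; run it under something like `taskset -c 0-3` to watch the two numbers diverge.

    // Contrast the machine's total logical CPU count (what a naive
    // interface reports) with the size of this process's affinity set
    // (the CPUs it can actually be scheduled on).
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"

        "golang.org/x/sys/unix"
    )

    // totalCPUs counts "processor" entries in /proc/cpuinfo: every
    // logical CPU in the machine, regardless of affinity.
    func totalCPUs() int {
        f, _ := os.Open("/proc/cpuinfo")
        defer f.Close()
        n := 0
        s := bufio.NewScanner(f)
        for s.Scan() {
            if strings.HasPrefix(s.Text(), "processor") {
                n++
            }
        }
        return n
    }

    func main() {
        var set unix.CPUSet
        unix.SchedGetaffinity(0, &set) // pid 0 means the calling process
        fmt.Printf("machine: %d logical CPUs, schedulable: %d\n",
            totalCPUs(), set.Count())
    }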
At the very least you should be aware that these counts usually count each hyperthreaded core twice. Hyperthreading does provide some opportunity for increased parallelism, but a second hardware thread is noticeably worse than having another separate physical core.
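You can see the double-counting on your own machine with a rough sketch like this one (Go; the "physical id"/"core id" field names are an assumption about x86 /proc/cpuinfo layout). On a hyperthreaded box the logical count is typically 2x the core count.

    // Count logical CPUs vs. unique (physical id, core id) pairs,
    // i.e. physical cores, from /proc/cpuinfo.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, _ := os.Open("/proc/cpuinfo")
        defer f.Close()

        cores := map[string]bool{} // unique "physical id/core id" pairs
        logical, phys := 0, ""

        s := bufio.NewScanner(f)
        for s.Scan() {
            line := s.Text()
            switch {
            case strings.HasPrefix(line, "processor"):
                logical++
            case strings.HasPrefix(line, "physical id"):
                phys = line
            case strings.HasPrefix(line, "core id"):
                cores[phys+"/"+line] = true
            }
        }
        fmt.Printf("logical CPUs: %d, physical cores: %d\n",
            logical, len(cores))
    }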
Right. Currently runtime.NumCPU tries to be fancy by looking at the population count of the cpuset mask[1], but it only samples it once, at process startup. In a hosted environment using containers there's no reason to believe the cpuset will remain fixed over the life of the process, so this can undercount the available CPUs, leaving you with a GOMAXPROCS that is too low.
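One possible workaround is to re-read the affinity mask yourself and nudge GOMAXPROCS when it changes. A sketch, assuming Linux and golang.org/x/sys/unix; the 30-second interval is arbitrary, and whether the scheduler churn is worth it depends on the workload.

    package main

    import (
        "runtime"
        "time"

        "golang.org/x/sys/unix"
    )

    // adjustGOMAXPROCS periodically re-reads this process's cpuset and
    // updates GOMAXPROCS, since runtime.NumCPU won't notice changes.
    func adjustGOMAXPROCS(interval time.Duration) {
        for range time.Tick(interval) {
            var set unix.CPUSet
            if err := unix.SchedGetaffinity(0, &set); err != nil {
                continue
            }
            // GOMAXPROCS(0) reads the current value without changing it.
            if n := set.Count(); n > 0 && n != runtime.GOMAXPROCS(0) {
                runtime.GOMAXPROCS(n)
            }
        }
    }

    func main() {
        go adjustGOMAXPROCS(30 * time.Second)
        // ... rest of the program ...
        select {}
    }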
Anecdotally, it's very often not bad (and in fact sometimes good) to over-provision GOMAXPROCS. We have used as much as 3 to 6x the number of hyperthreaded cores with good results, depending on the workload. This can also insulate you against some container changes.
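In Go terms that's just a one-liner at startup; the 3x below is illustrative, not a recommendation, since we tuned per workload within that 3-6x range.

    package main

    import "runtime"

    func main() {
        // Illustrative: over-provision relative to the startup CPU count.
        // A multiplier like this mainly helps I/O-heavy workloads.
        runtime.GOMAXPROCS(3 * runtime.NumCPU())
        // ... rest of the program ...
    }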