The title is misleading. "77% slower" sounds like the system calls take 1.77x as long on EC2. In fact, the results indicate that the normal calls are 77% faster - i.e. they take only about 23% of the EC2 time - which means gettimeofday and clock_gettime calls take nearly 4.5x longer on EC2 than they do on ordinary systems.
This is a big speed hit. Some programs can use gettimeofday extremely frequently - for example, many programs call timing functions when logging, performing sleeps, or even constantly during computations (e.g. to implement a poor-man's computation timeout).
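To make the "constantly during computations" case concrete, here's a minimal sketch (the function names and the fake workload are mine, not from the article) of a poor man's timeout that issues one clock_gettime call per loop iteration - the kind of pattern that hurts badly when each timing call is slow:

```c
/* Sketch of a "poor man's" computation timeout: one timing call per
 * iteration. The workload is a stand-in; the point is only that this
 * pattern pays the cost of clock_gettime() millions of times. */
#include <stdio.h>
#include <time.h>

static double now_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    const double deadline = now_seconds() + 2.0;  /* 2-second budget */
    unsigned long iterations = 0;
    volatile double acc = 0.0;

    while (now_seconds() < deadline) {  /* timing call on every pass */
        acc += iterations * 0.5;        /* pretend work */
        iterations++;
    }

    printf("did %lu iterations before timeout (acc=%f)\n", iterations, acc);
    return 0;
}
```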
The article suggests changing the time source to tsc as a workaround, but also warns that it could cause unwanted backwards time warps - making it dangerous to use in production. I'd be curious to hear from those who are using it in production how they avoided the "time warp" issue.
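For reference, the clocksource in question is exposed through sysfs on Linux. A small sketch of my own (not from the article) that prints the available and currently selected clocksources; switching to tsc means writing "tsc" into current_clocksource as root, or booting with clocksource=tsc:

```c
/* Print the kernel's available and current clocksources via sysfs.
 * These paths are the standard Linux ones; read-only here, so it is
 * safe to run anywhere. */
#include <stdio.h>

static void print_file(const char *label, const char *path)
{
    char buf[128];
    FILE *f = fopen(path, "r");
    if (!f) {
        perror(path);
        return;
    }
    if (fgets(buf, sizeof buf, f))
        printf("%s: %s", label, buf);
    fclose(f);
}

int main(void)
{
    print_file("available", "/sys/devices/system/clocksource/clocksource0/available_clocksource");
    print_file("current  ", "/sys/devices/system/clocksource/clocksource0/current_clocksource");
    return 0;
}
```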
> Some programs can use gettimeofday extremely frequently
This is what's usually considered the "root cause" of this problem, though. It's easy enough, if it's your own program, to wrap the OS time APIs and cache the evaluated timestamp for one event-loop iteration (or for a given length of real time, by checking against the TSC). Most modern interpreters/VM runtimes also do this.
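As a rough illustration of that caching idea - the helper names here are hypothetical, not any particular runtime's API - a wrapper that only hits the OS once per event-loop tick might look like:

```c
/* Sketch: cache the gettimeofday() result and reuse it until the event
 * loop invalidates the cache once per tick. Names are illustrative. */
#include <stdio.h>
#include <sys/time.h>

static struct timeval cached_tv;
static int cache_valid = 0;

/* Call once at the top of each event-loop iteration. */
static void time_cache_invalidate(void)
{
    cache_valid = 0;
}

/* Within one iteration, only the first call actually hits the OS. */
static void cached_gettimeofday(struct timeval *tv)
{
    if (!cache_valid) {
        gettimeofday(&cached_tv, NULL);
        cache_valid = 1;
    }
    *tv = cached_tv;
}

int main(void)
{
    for (int tick = 0; tick < 3; tick++) {
        time_cache_invalidate();      /* once per loop iteration */
        struct timeval a, b;
        cached_gettimeofday(&a);      /* real call */
        cached_gettimeofday(&b);      /* served from cache */
        printf("tick %d: %ld.%06ld (second call cached)\n",
               tick, (long)a.tv_sec, (long)a.tv_usec);
        (void)b;
    }
    return 0;
}
```

The real-time variant mentioned above would replace the explicit per-tick invalidation with a cheap TSC read to decide when the cached value has gone stale.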
Yeah, like the PHP Xdebug extension. In 5.3 at least, even with it just loaded and nothing enabled, it called gettimeofday thousands of times and would add seconds to web app render times for me (also on Xen, with slow gettimeofday).