I used Go for an HTTP scraper a long time ago and had to go to insane* lengths to get the standard HTTP client not to leak and to stay performant. Honestly, that was about when I soured on the language and moved on.
How in the world does turning off connection pooling make the client more performant for high request volumes (and presumably on the same domain, if it's a scraper)?
Because when you hit a new domain on nearly every request, the pool just keeps accumulating new connections. That by itself should be fine: the pool should start evicting old connections at some point. But Go didn't do that; it just grew without bound until file descriptors ran out.
Ah, today you would set MaxIdleConns on the http.Transport (not the client itself) to avoid this. Even back then I think there was DisableKeepAlives, but I would totally believe there was a leak hiding in there ca. 2013.
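For the curious, here's a minimal sketch of how you'd cap the pool on a modern http.Transport; the specific values are illustrative assumptions, not tuned numbers:

    package main

    import (
        "net/http"
        "time"
    )

    // newScraperClient returns a client tuned for hitting many distinct
    // hosts, so the idle pool can't grow until file descriptors run out.
    func newScraperClient() *http.Client {
        transport := &http.Transport{
            // Cap total idle connections kept across all hosts.
            MaxIdleConns: 100,
            // A scraper rarely revisits the same host, so keep the
            // per-host idle count small.
            MaxIdleConnsPerHost: 2,
            // Evict idle connections that haven't been reused recently.
            IdleConnTimeout: 30 * time.Second,
            // Or sidestep pooling entirely (one connection per request):
            // DisableKeepAlives: true,
        }
        return &http.Client{
            Transport: transport,
            Timeout:   15 * time.Second,
        }
    }

Bounding both the global and per-host idle counts plus an idle timeout keeps a many-hosts scraper from accumulating sockets, while DisableKeepAlives avoids pooling altogether at the cost of a fresh connection for every request.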
* https://github.com/pkulak/simpletransport