
Random Go fact I learned recently: by default, Go's http.Client follows redirects automatically (up to 10 in a row). If you don't want that, you can do this instead:

    // RoundTrip performs a single HTTP transaction, so redirects are never followed.
    res, err := new(http.Transport).RoundTrip(req)


You can tell http.Client to not redirect:

    client := &http.Client{
        CheckRedirect: func(req *http.Request, via []*http.Request) error {
            return http.ErrUseLastResponse
        },
    }
Then use the client as normal. You can also put custom logic in CheckRedirect for more fine-grained redirect behavior.
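
For example, a quick sketch (the URL is just a placeholder): with ErrUseLastResponse the 3xx comes back as-is:

    resp, err := client.Get("https://example.com/some-redirect")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    // The redirect is not followed, so you see the 3xx status
    // and Location header directly.
    fmt.Println(resp.StatusCode, resp.Header.Get("Location"))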


Another fun one, in the masochistic sense, is that the default HTTP client has no timeout, and you have to remember to set one yourself if you don't want to potentially leak connections.
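
For anyone who hasn't hit this yet, the minimal fix is a sketch like this:

    client := &http.Client{
        // The zero value means no timeout at all, so set one explicitly.
        Timeout: 10 * time.Second, // caps the whole exchange, body read included
    }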


The stdlib http client is a bit absurd in some areas, yeah. Timeouts in particular are so confusing and misleading that there are quite a few lengthy blog posts about them alone, e.g. https://blog.cloudflare.com/the-complete-guide-to-golang-net...

Granted, part of it is that the problem is too complex to be captured by one timeout, but everyone wants one timeout. It's kinda like string sub-slicing with UTF-8 multi-byte characters: the over-simplified version is a Bad Idea™ because it's fundamentally wrong and will often cause problems. E.g. I routinely encounter tools with short timeouts that don't work on slow network connections (e.g. Bazel), despite actively downloading. What you generally want for interactive use is a timeout that ensures the download doesn't hang forever doing nothing, not one that requires the download to complete within a specific amount of time.
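
For reference, a sketch of the more granular knobs that do exist (values purely illustrative):

    transport := &http.Transport{
        DialContext: (&net.Dialer{
            Timeout: 5 * time.Second, // TCP connect
        }).DialContext,
        TLSHandshakeTimeout:   5 * time.Second,
        ResponseHeaderTimeout: 10 * time.Second, // time until the first response header
        // Note: none of these bound how long the response body may take.
    }
    client := &http.Client{
        Transport: transport,
        Timeout:   5 * time.Minute, // end-to-end cap, including the body
    }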

... but also it's just abnormally bad.


There are fragments of discussion about the download timeout throughout the issue tracker, which end up leading back to this still-open but seemingly forgotten issue about adding InactivityTimeout: https://github.com/golang/go/issues/22982

I'd love to see this one addressed but it's not looking too hopeful at this stage.
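
In the meantime you can approximate it by hand. A minimal sketch, assuming you control the read loop (readAllWithIdleTimeout is just a name I made up; imports elided like the snippets above):

    func readAllWithIdleTimeout(ctx context.Context, url string, idle time.Duration) ([]byte, error) {
        ctx, cancel := context.WithCancel(ctx)
        defer cancel()

        req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
        if err != nil {
            return nil, err
        }
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()

        // Cancel the request if no bytes arrive for `idle`; every successful
        // read pushes the deadline out again.
        timer := time.AfterFunc(idle, cancel)
        defer timer.Stop()

        var buf bytes.Buffer
        chunk := make([]byte, 32*1024)
        for {
            n, err := resp.Body.Read(chunk)
            if n > 0 {
                timer.Reset(idle)
                buf.Write(chunk[:n])
            }
            if err == io.EOF {
                return buf.Bytes(), nil
            }
            if err != nil {
                return nil, err
            }
        }
    }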


I used Go for an HTTP scraper a long time ago and had to go to insane* lengths to get that library to not leak and be performant. Honestly, that was about when I soured on the language and moved on.

* https://github.com/pkulak/simpletransport


How in the world does turning off connection pooling make the client more performant for high request volumes (and presumably on the same domain, if it's a scraper)?


Because when you hit a new domain on nearly every request, the pool just keeps accumulating new connections. That should be fine; the pool should start evicting old connections at some point. But Go didn't do that, and it grew until file descriptors ran out.


Ah, today you would set MaxIdleConns on the transport to avoid this. Even back then I think there was DisableKeepAlives, but I would totally believe there was a leak hiding in there ca. 2013.
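
Something like this, roughly (values illustrative):

    transport := &http.Transport{
        MaxIdleConns:        100,              // overall cap on the idle pool
        MaxIdleConnsPerHost: 2,                // and per host
        IdleConnTimeout:     90 * time.Second, // evict idle connections eventually
    }
    client := &http.Client{Transport: transport}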


Yeah, this was forever ago, so in hindsight, maybe I shouldn't have even brought it up. haha


No one in their right mind should be using the default HTTP client or server. They only exist to let you quickly hack something together. For any serious application you should always define your own.
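
On the server side, "define your own" is a sketch like this (mux is whatever handler you already have; values illustrative):

    srv := &http.Server{
        Addr:              ":8080",
        Handler:           mux,
        ReadHeaderTimeout: 5 * time.Second, // mitigates slowloris-style clients
        ReadTimeout:       10 * time.Second,
        WriteTimeout:      30 * time.Second,
        IdleTimeout:       120 * time.Second,
    }
    log.Fatal(srv.ListenAndServe())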


I think the introduction of contexts has largely resolved this footgun... As long as you actually use contexts, which many still don't. :(
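
For completeness, a sketch of the per-request version (URL is a placeholder):

    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    req, err := http.NewRequestWithContext(ctx, http.MethodGet, "https://example.com", nil)
    if err != nil {
        log.Fatal(err)
    }
    // The deadline covers the whole request, including reading the body.
    resp, err := http.DefaultClient.Do(req)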



