
It's interesting: why is everyone moving to Berlin in the first place? It is not a great city, and yet it's often equated with Germany as a whole.


This startled me 10 years ago when I arrived in Hamburg: why had I only been aware of Berlin? I think it's due to the (failed) international projection of the great diversity of cities and towns in Germany. Hamburg is a very nice place, and largely unknown outside Germany.


I'm a bit surprised nobody mentions the `--platform` argument that docker accepts to emulate a different architecture per container. It's a very smooth experience if you're reliant on 3rd party images.

Available through compose as well.
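
A minimal sketch (image and target architecture are just illustrative), assuming QEMU/binfmt emulation is available on the host, which Docker Desktop ships by default:

    # Run an arm64 container on an x86_64 host (emulated)
    docker run --rm --platform linux/arm64 alpine uname -m   # prints aarch64

And the equivalent in a compose file:

    services:
      app:
        image: alpine
        platform: linux/arm64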


I worked with cross-compiling containers. The compile times were... bad. Our x86 build took, like, two minutes (this was a very small and lean C++ application). The arm32v7 ones took upwards of 30 minutes.


Works the other way around as well: compile platform-independent code (such as Java) on --platform=$BUILDPLATFORM in a build stage and then copy the result into containers built for --platform=$TARGETPLATFORM. That way your build runs only once, natively, but you can still produce the correct runtime containers for each architecture rather quickly.
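
A minimal Dockerfile sketch of that pattern (base images, paths and the Maven build are just placeholders), built with something like `docker buildx build --platform linux/amd64,linux/arm64 .`:

    # syntax=docker/dockerfile:1
    # Build stage always runs on the native architecture of the build host
    FROM --platform=$BUILDPLATFORM maven:3-eclipse-temurin-17 AS build
    WORKDIR /src
    COPY . .
    RUN mvn -q -DskipTests package

    # Runtime stage is emitted once per requested target platform
    FROM --platform=$TARGETPLATFORM eclipse-temurin:17-jre
    COPY --from=build /src/target/app.jar /app/app.jar
    ENTRYPOINT ["java", "-jar", "/app/app.jar"]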


Same experience. Beefy machine, takes 45 minutes to compile cmake using a ppc64le container on an x86_64 host.


I don't want to nitpick, but the article says Alphabet.


Alphabet is Google.


No, Alphabet is the parent company to Google. There's a slight difference.


A formal difference on paper. Top management actually transferred from Google to Alphabet when Alphabet came to life.


It's Google plus companies that Google has spun off?


Google birthed a parent company called Alphabet.


Well, I'll be... A parent-child relationship more odd than those found in OS process hierarchies.


We're using Let's Encrypt with local domains.

We have a domain for internal usage only, where we can modify TXT records. Through this, and with a little help from acme.sh and dnsmasq, every workstation can have unlimited valid certificates for local projects.


Are you using a fake domain or a real one? If fake, I'd be interested in how that works.


Real domain, just no A record.

For our projects, we create domains like {project}.{workstation}.company.net
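
Roughly like this (a sketch only; the DNS hook, credentials, domain and paths are made up for illustration, assuming the internal zone is hosted somewhere acme.sh has a DNS plugin for):

    # DNS-01 challenge: acme.sh only needs to create a TXT record,
    # so the name never needs a public A record.
    export CF_Token="..."    # credentials for the chosen DNS provider plugin
    acme.sh --issue --dns dns_cf -d "myproject.alice.company.net"

    # dnsmasq on the workstation answers the name locally
    # /etc/dnsmasq.conf
    address=/alice.company.net/127.0.0.1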


Any chance there's a write-up or docs on this somewhere?


I think there's a common misconception around the term "push". HTTP/2 doesn't push in the sense of a push notification; rather, it "pushes" assets down the connection that are known to be needed by the currently transferred document (whatever that may be).

That way the web server can proactively push the named stylesheet to the client, because it knows the stylesheet is needed to render the page, and the client doesn't have to ask for it (which would cost another round trip).
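
For a concrete picture, this is roughly what that looks like with Go's net/http server push support (just a sketch; handler, paths and the certificate files are made up):

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func index(w http.ResponseWriter, r *http.Request) {
        // On an HTTP/2 connection where the client allows push,
        // offer the stylesheet before sending the HTML that needs it.
        if pusher, ok := w.(http.Pusher); ok {
            _ = pusher.Push("/static/app.css", nil) // an error just means "not pushed"
        }
        fmt.Fprint(w, `<html><head><link rel="stylesheet" href="/static/app.css"></head><body>hi</body></html>`)
    }

    func main() {
        http.HandleFunc("/", index)
        http.Handle("/static/", http.FileServer(http.Dir(".")))
        // Push requires HTTP/2, which Go's server only enables over TLS.
        log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
    }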


"that are known to be needed by the currently transferred document"

How does the server know what the browser/client "needs"? The client may already have the stylesheet cached. Making the server "in control" seems wrong and makes things even more complicated.


That's the thing, it doesn't. HTTP/2 push is one of the big open questions of HTTP/2, and knowing whether a resource needs to be pushed relies on good heuristics and black magic.

There is, however, a small emerging convention, pioneered by h2o, called casper (https://h2o.examp1e.net/configure/http2_directives.html#http...). The idea is that all resources ever sent to the client are recorded in a probabilistic data structure stored in a cookie. On every request the structure is sent back to the server, which can then check whether the resource has a good chance of already being known to the browser.

By the way, there are some benchmarks done by h2o's author here: http://blog.kazuhooku.com/2015/10/performance-of-http2-push-.... The conclusion is all yours.
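
In h2o that boils down to a one-line configuration directive, roughly like this (a sketch; hostname and document root are placeholders, see the linked docs for the exact options):

    # h2o.conf (excerpt)
    hosts:
      "example.com":
        paths:
          "/":
            file.dir: /var/www
        # remember already-served assets in a fingerprint cookie so the
        # server can skip pushing what the client most likely has cached
        http2-casper: ON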


The client can cancel the push. But yes, there's definitely wasted bandwidth here - the reason to still do it is that connections are now fast enough that the extra download time is small compared to the time required to parse HTML/send new HTTP request/receive response/render CSS.


That's a very Silicon Valley way to look at things. How much does that bandwidth cost on dial-up or 2G?


The client will cancel the pushed stream if it already has the file cached, sooooo, not much.


I'm more concerned with a more "traditional" setup - say, a festival providing WiFi to many people through a limited upstream. It used to be that you could provide a caching proxy locally.

With the war on MITM, it's really hard to set up something that scales traffic in this way - even if the actual data requested by clients could readily scale.

I know it's a trade-off between security and features - but it still makes me sad.


It's 2G. By the time the cancel is received by the server, the server will have sent the resource, the bytes will have traveled and the user will be billed.


First you get a PUSH_PROMISE, which is a single frame. It's tiny.

That tells the client what the server wants to send.

The client can respond with an RST_STREAM frame: https://http2.github.io/http2-spec/#RST_STREAM Again, that's a single frame.

By design it's meant to be extremely small and quick, even on high-latency and/or low-bandwidth connections.


You imply that there is a delay between the promise and the push, but it is not necessarily so. In fact the promise and the data may be sent in the same packet.


The client can disable push, so if it's on 2G, it can avoid this issue entirely.


A copy of the spec can be found here:

https://http2.github.io/http2-spec/#PushResources

There are a few interesting things here that I want to point out:

* "A client can request that server push be disabled" - this is an explicit parameter the client sends to the server as part of its connection settings: https://http2.github.io/http2-spec/#SETTINGS_ENABLE_PUSH

* "Pushed responses that are cacheable (see [RFC7234], Section 3) can be stored by the client, if it implements an HTTP cache. Pushed responses are considered successfully validated on the origin server (e.g., if the "no-cache" cache response directive is present ([RFC7234], Section 5.2.2)) while the stream identified by the promised stream ID is still open"

Note that pushed content always starts with a PUSH_PROMISE message to the client, which the client can decide, of its own volition, to reject. The spec for a PUSH_PROMISE frame is here, https://http2.github.io/http2-spec/#PUSH_PROMISE and it's extremely small. Even on 2G or dial-up it's negligible by design.

* "Once a client receives a PUSH_PROMISE frame and chooses to accept the pushed response, the client SHOULD NOT issue any requests for the promised response until after the promised stream has closed.

If the client determines, for any reason, that it does not wish to receive the pushed response from the server or if the server takes too long to begin sending the promised response, the client can send a RST_STREAM frame, using either the CANCEL or REFUSED_STREAM code and referencing the pushed stream's identifier. "

Wittingly or otherwise, your message comes across as "everyone on the standards bodies is an idiot, nobody thought about anything beyond the valley, and I'm smarter than they are." That's beyond ridiculous. The standard was designed by subject-matter experts from right across the world, with interests in web technologies across all sorts of markets, including developing nations where every single byte is important. A lot has been designed into the HTTP/2 specification to account for that and to explicitly try to improve the end-user experience under those conditions.


The server doesn't send the data every time. First it sends a PUSH_PROMISE frame letting the client know "hey, I've got this thing if you need it", and the browser can respond with a frame saying "nah, don't need it".


The main problem with HTTP/2 push, and why it's pointless, is that it's not cache-aware.

So you’re pushing unrequested data to everyone regardless.

h2o tries to solve this problem with a special cookie. More here:

http://blog.kazuhooku.com/2015/12/optimizing-performance-of-...

But without something like that it’s a feature that will never really gain traction.


The danger here is that you push too much. But the actual response will still be delivered almost as fast (due to the non-blocking behavior of HTTP/2 connections), so sure, it's not optimal, but there are a lot of use cases besides static assets where it is very useful.


That's exactly my experience. I used the Docker omnibus containers to get up and running quickly. Running, supervising and updating is a pleasure in combination with docker-compose.

As for CI, I utilized the docker-machine runner so we could autoscale based on demand.

Big plus: With Google Cloud Platform, we can leverage preemptible VMs for the runners (not for the coordinator of course), so the cost is really low.
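
For reference, the relevant bits of the runner configuration look roughly like this (a sketch from memory; URL, token, project, zone and machine type are placeholders, and the option names should be checked against the docker-machine Google driver docs):

    # /etc/gitlab-runner/config.toml (excerpt)
    [[runners]]
      name = "autoscale-gce"
      url = "https://gitlab.example.com/"
      token = "RUNNER_TOKEN"
      executor = "docker+machine"
      [runners.docker]
        image = "alpine:latest"
      [runners.machine]
        IdleCount = 0                     # spin machines up on demand only
        MachineDriver = "google"
        MachineName = "runner-%s"
        MachineOptions = [
          "google-project=my-project",
          "google-zone=europe-west1-b",
          "google-machine-type=n1-standard-4",
          "google-preemptible=true",      # the cheap, interruptible VMs
        ]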


Awesome! If you're using Google Cloud you'll be happy to hear that we just merged Kubernetes support for Runner creation: https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/merge_r... This gives you the option to use Google Container Engine to spin up new Runners.


This is fantastic. Right now I use GCE autoscaling groups to spin up runners while I've got several kubernetes clusters running 24/7. It'll be nice to utilize those for builds (especially with how fast the scheduler is) going forward. Saves money, as well.


Awesome, glad to hear you can make use of this.


This is awesome! Our whole stack runs on Kubernetes, so this kind of integration is very welcome!


Glad to hear that. And many thanks to James Munnelly for being patient with me and the rest of our team over the period of a year to get this done.

