Congrats on the launch! It's great to see more innovation and choice in the protobuf/gRPC space. Definitely planning to try this out.
For personal projects, I typically have a single Go binary that serves (1) gRPC, (2) grpc-web (using the Improbable library), and (3) grpc-gateway. Along with protobuf-ts for the client, IMO it's quite a pleasant stack. Glad to see timostamm involved in protobuf-es :)
I'm curious: Can you say more about why you ported the code generator from Go to TypeScript (https://github.com/bufbuild/connect-web/commit/8002c2696aad0...)? Was it easier to generate code this way, or did it just get too unwieldy to package Go binaries in NPM modules?
Timo is amazing, and we wouldn't have been able to build any of this without him :)
We ported the code generators to TypeScript because - as you guessed - it was a bit of a pain to package them for NPM. We also felt that it would be more approachable for TypeScript developers to read, which we hoped would contribute to a sense that all of this isn't actually all that complex. We were a bit worried that performance might be perceivably worse, but the difference wasn't significant.
Improbable's grpc-web [1] works pretty well. I have been using it along with their new WebSocket transport for about two months, and as an RPC layer it's great.
However, I do miss being able to inspect the traffic in devtools, and the TS SDK is still not ESM-friendly and requires jumping through hoops to get working with Vite.
So we ended up bundling it separately with esbuild (along with its google-protobuf dependencies and so on) into a single large ESM file that other projects can easily consume.
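For reference, that bundling step is roughly this shape — the entry point and output paths here are illustrative, not the actual project layout:

```shell
# Bundle the grpc-web client plus its google-protobuf dependencies
# into one ESM file that downstream projects can import directly.
# Paths are made up for illustration.
esbuild src/rpc/index.ts \
  --bundle \
  --format=esm \
  --outfile=dist/rpc.esm.js
```

The `--bundle` flag pulls the CommonJS google-protobuf dependencies into the output, and `--format=esm` emits a module that Vite can consume without special configuration.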
Buf seems to be a solution that handles all of this better. Very excited to try this out.
You maintain a list of packages in a Cask file and run "cask install" to automatically download and install them from a repository like MELPA. Like a Gemfile or requirements.txt, but for Emacs.
I found this really reduced the number of elisp snippets I've had to write or grab from around the web. Most popular packages are on MELPA, and there's usually a more polished way to accomplish things I was hacking together myself.
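A minimal Cask file looks something like this (the package names are just examples):

```
;; Cask file: declares where packages come from and which ones to install.
(source melpa)

(depends-on "magit")
(depends-on "projectile")
```

Running `cask install` in the same directory then fetches and installs the listed packages from MELPA.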
Hi! I'm the creator of Shipway. I'd love to get any feedback folks have. I'm also happy to answer questions.
The overall idea is to make it as easy as possible to go from source code in a Git repository to a running application. Shipway makes heavy use of the GitHub API and tries to act as a thin layer on top. It sets up a GitHub hook to trigger a Docker build when you push commits. It hosts the resulting Docker images in its own registry, and can execute webhooks after successful builds.
There are a few similar products out there, namely Docker Hub and quay.io. Both are interesting in their own way, but neither allows you to carry over your GitHub organizations and teams and use them with Docker repositories.
I'm excited about all the container hosting options that are starting to mature. Once the Kubernetes API stabilizes, it seems like it could become a standard interface for running containers on any cloud. The EC2 Container Service also looks interesting.
When I used Graphite with Nagios, I bypassed all the Nagios data-collection and graphing features. Instead, I funneled all the data into Graphite and used check-graphite to alert on it:
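The core of that kind of check is simple: Graphite's render API (`GET /render?target=...&format=json`) returns series whose datapoints are `[value, timestamp]` pairs, with `null` for missing samples, and the check maps the latest value to a Nagios exit code. A sketch of the threshold logic — the function and parameter names are mine, not check-graphite's:

```typescript
// Graphite datapoints are [value, timestamp]; value is null when no
// sample was recorded for that interval.
type Datapoint = [number | null, number];

// Map the most recent sample to a Nagios-style status code:
// 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN.
export function evaluate(
  datapoints: Datapoint[],
  warn: number,
  crit: number
): 0 | 1 | 2 | 3 {
  // Graphite often pads the most recent intervals with nulls,
  // so walk backwards to the last real sample.
  const latest = [...datapoints].reverse().find(([v]) => v !== null);
  if (!latest) return 3; // UNKNOWN: no data at all
  const value = latest[0] as number;
  if (value >= crit) return 2; // CRITICAL
  if (value >= warn) return 1; // WARNING
  return 0; // OK
}
```

Wiring this up to a fetch of the render endpoint and `process.exit(code)` gives you a complete Nagios check without using any of Nagios's own data collection.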