It makes it really nice to define APIs (like with OpenAPI or Swagger). There are a bunch of code generators out there that take your definitions and produce native Swift, Objective-C, Java, or Go API stubs for either clients or servers.
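For a sense of what those generators consume, here is a minimal, made-up .proto definition (service and message names are invented for illustration):

    syntax = "proto3";

    package demo.v1;

    // Hypothetical service; protoc plugins generate client and server
    // stubs in Swift, Objective-C, Java, Go, etc. from this one file.
    service UserService {
      rpc GetUser(GetUserRequest) returns (GetUserResponse);
    }

    message GetUserRequest {
      string user_id = 1;
    }

    message GetUserResponse {
      string user_id = 1;
      string display_name = 2;
    }

A typical Go invocation would be something like "protoc --go_out=. --go-grpc_out=. user.proto", with equivalent plugins for the other languages.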
It is a joy to work with in cross-functional teams: you define your APIs while taking into account what API versioning means, how to define enums, how to rename fields while staying compatible with the wire format, and other things.
Also, if you route a payload from service A via B to C, where each service is deployed independently and picks up API changes on its own schedule, gRPC supports you in handling those scenarios.
Sure enough, OpenAPI can do all of this I guess, but gRPC definitions in Protobuf (or Google's artman) are just way quicker to understand and work with. (At least for me.)
Not familiar with gRPC, questions: how does the tooling compare to HTTP? Browser devtools let you look at what's on the wire, replay requests with slight alterations for debugging, view timelines and visualizations of the communication history, extract and send self-contained scriptlets (like you can with curl) to someone else, etc. Which of these have equivalents in generally available gRPC tooling?
There is also Charles Proxy, which supports Protobuf.
But in my experience you use the code generator and trust the serializer and deserializer, since they are unit tested.
So you can just unit test your methods and don’t have to look at the actual binary blob payload.
You trust that gRPC is battle-tested and you can just test your code.
You would probably wrap the generated methods/objects/structs in your own domain model and unit test the mapping between them.
Using the gRPC-generated objects directly throughout your code does work, but sometimes it's not what you want to work with.
So I would rather introduce another boundary around the transport. But that is personal preference (in case I ever want to get rid of gRPC without touching my business logic).
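To make that boundary concrete, a minimal Go sketch (the generated package pb, its types, and its getters are assumptions based on the made-up proto earlier in the thread):

    package user

    import pb "example.com/gen/demov1" // hypothetical generated package

    // User is the domain model the rest of the codebase works with;
    // business logic never sees the generated types.
    type User struct {
        ID   string
        Name string
    }

    // FromProto maps the generated struct into the domain model.
    // This mapping is what you unit test.
    func FromProto(p *pb.GetUserResponse) User {
        return User{ID: p.GetUserId(), Name: p.GetDisplayName()}
    }

    // ToProto maps back for outgoing messages.
    func ToProto(u User) *pb.GetUserResponse {
        return &pb.GetUserResponse{UserId: u.ID, DisplayName: u.Name}
    }

If gRPC ever goes away, only this mapping layer has to change.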
In general, not nearly as mature. Note though that gRPC is not really meant for browser->server calls (grpc-web notwithstanding); it is designed for server<->server communication.
There is some tooling out there for development (https://github.com/fullstorydev/grpcurl and https://github.com/fullstorydev/grpcui are pretty nice), but it's still much less mature than the massive ecosystem of tooling available for HTTP-based services. That is partly an artifact of gRPC's relative youth compared to REST, and partly due to more fundamental reasons (binary wire format, mutual-TLS-based authentication, etc.).
All that said, I've been working with gRPC for the past six months or so, and overall I think the development experience is much nicer on net.
There's grpcurl and other similar tools for when you just want to run a simple gRPC request against a server. If your server runs the reflection service, it will also let you inspect the schema of whatever is running on a given endpoint.
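For example (host, port, and service names made up; the server needs the reflection service enabled for list/describe to work):

    # List services exposed via server reflection
    grpcurl -plaintext localhost:8080 list

    # Inspect the schema of one service
    grpcurl -plaintext localhost:8080 describe demo.v1.UserService

    # Call a method with a JSON body, curl-style
    grpcurl -plaintext -d '{"user_id": "42"}' localhost:8080 demo.v1.UserService/GetUser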
For in-browser use with grpc-web, if you use text-proto-on-XHR, things continue to work much as with REST/JSON.
For inter-server debugging, you usually defer to OpenTracing or similar and capture request data there.
No, gRPC/protobuf instead provides you with ways to evolve your schema easily, both in the IDL and in the resulting wire format, without breaking either side.
You can rename fields (keeping the tag number, and therefore wire-format compatibility), add fields (which old readers will ignore), remove fields (all fields are explicitly optional, so every consumer checks for their presence anyway), omit unset fields entirely (the wire encoding is, to a certain degree, self-describing), etc.
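As a sketch, here is a hypothetical "v2" of the message from earlier in the thread, evolved compatibly (field names invented):

    // Same message, one release later: all changes below are
    // wire-compatible, because only tag numbers matter on the wire.
    message GetUserResponse {
      string user_id = 1;
      string full_name = 2;   // renamed from display_name; tag 2 unchanged
      string avatar_url = 3;  // new field; old readers skip unknown tag 3
      reserved 4;             // tag of a removed field, never to be reused
      reserved "email";       // ditto for its name
    }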
Another important part of forward and backward compatibility is that protos support passing through unknown fields. If I add a new field to a shared proto that A, B, and C all use, and A and C have been updated but B was never updated, then as long as B handles the message correctly (i.e., doesn't strip unknown fields) the new field will still be delivered to C.
I use this at my current job, where our client is a hardware appliance that we are not allowed to update at all. If we need to add new data for our backend to handle that the client downloads locally, we can, and we don't need to worry about pushing new client code to do it.
This is magic for anyone who has been using Retrofit or something similar and is used to seeing fields get dropped as normal.
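In Go, that pass-through looks roughly like this (oldpb is a hypothetical package compiled from the stale proto; current protobuf runtimes retain unknown fields by default):

    package relay

    import (
        "google.golang.org/protobuf/proto"

        oldpb "example.com/gen/old/demov1" // predates avatar_url (tag 3)
    )

    // relay is what service B effectively does: decode with its old
    // schema, work with the fields it knows, and re-encode. The unknown
    // tag 3 is kept in the message's unknown-field set and survives.
    func relay(wire []byte) ([]byte, error) {
        var msg oldpb.GetUserResponse
        if err := proto.Unmarshal(wire, &msg); err != nil {
            return nil, err
        }
        // ... B's own logic against the fields it knows ...
        return proto.Marshal(&msg) // still contains the new field
    }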
Context propagation is also part of gRPC, which supports you in thinking about tracing, request cancellation, and deadlines, so that you actually have a chance of enforcing SLOs.
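A small Go sketch of what that buys you (client and message types reuse the made-up UserService from earlier in the thread):

    package client

    import (
        "context"
        "time"

        pb "example.com/gen/demov1" // hypothetical generated package
    )

    func fetchUser(ctx context.Context, c pb.UserServiceClient, id string) (*pb.GetUserResponse, error) {
        // The deadline is carried with the RPC; if the server fans out
        // to further services, they see the same remaining budget, and
        // cancellation propagates the whole way down.
        ctx, cancel := context.WithTimeout(ctx, 200*time.Millisecond)
        defer cancel()
        return c.GetUser(ctx, &pb.GetUserRequest{UserId: id})
    }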