The REST architecture always included the possibility of gateways and proxies in the end-to-end communication path, so that shared responsibilities could be delegated out of the user agent or origin server. This balances the need for centralized administration of some things with decentralized deployment of others. Most microservices systems, even those that have dropped HTTP in favor of something like gRPC, Kafka, or RabbitMQ, take a lot of WebArch lessons to heart in how they manage their policies, routes, etc., balancing centralized management against decentralized evolution.
The problem with SOA-in-practice was that everything flowed through a monolithic ESB acting as both client and origin server, which needed omniscient knowledge of every route, transformation, etc., and was often a single administrative bottleneck and fault domain. Some SOA frameworks had service mesh patterns where you could deploy decentralized engines alongside your services, but without cloud IaaS/PaaS circa 2006-2007, there was no way to maintain/deploy/upgrade these policy agents without a heavy operational burden.
In sum: CORBA, COM+, and SOAP/HTTP were mostly-centralized approaches to distributed services, while REST was about extreme decentralized evolution over decades. Most teams are looking for something akin to a dial: a bit more control than dozens of independent gRPC/HTTP/Rabbit/Kafka producers and consumers, but nothing as stupid as the SOAP/HTTP days.
Modern cloud native service mesh approaches like this Istio thing (NetflixOSS Zuul+Eureka+Ribbon or Linkerd are alternatives) are just decentralized gateways and proxies, possibly with a console/management appliance that makes it easy to propagate changes out across a subset of your microservices. This has the benefit of letting you default to decentralized freedom for your various microservices, but where you want administrative control over policy for a set of them, you don't have to go in and tweak 15 different configs.
NetflixOSS really pioneered this pattern. Netflix used things like Cassandra and Zuul hot-deploy filters as the means to update routing/health/balancing configs across their fleet of microservices. There are alternative ways to handle this, of course: Hashicorp's Consul piggybacks on DNS and expects your client to figure things out via its REST API or DNS queries. There are also things like RabbitMQ or a REST-polling mechanism to propagate config changes, as not everyone wants Cassandra. Newer frameworks like Istio or Linkerd are further alternatives. We're spoiled for choice, for better or worse.
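(For a sense of what those hot-deploy filters look like, here's a rough Java sketch in the style of a Zuul 1.x routing filter; the header name, canary host, and hard-coded values are my own illustrative assumptions, not Netflix's actual setup.)

    import java.net.URL;

    import com.netflix.zuul.ZuulFilter;
    import com.netflix.zuul.context.RequestContext;

    // Rough sketch of a Zuul 1.x routing filter. Filters like this were pushed
    // out to the gateway fleet at runtime ("hot-deployed"), which is how routing
    // changes propagated without redeploying services.
    public class CanaryRoutingFilter extends ZuulFilter {

        @Override
        public String filterType() {
            return "route"; // runs when Zuul decides where to forward the request
        }

        @Override
        public int filterOrder() {
            return 10;
        }

        @Override
        public boolean shouldFilter() {
            // Hypothetical: only reroute requests carrying a canary header.
            return "true".equals(RequestContext.getCurrentContext()
                    .getRequest().getHeader("X-Canary"));
        }

        @Override
        public Object run() {
            try {
                // Hypothetical canary host; in practice this would come from
                // dynamic configuration rather than being hard-coded.
                RequestContext.getCurrentContext()
                        .setRouteHost(new URL("http://canary.example.internal:8080"));
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
            return null;
        }
    }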
Besides Netflix, I'd put Twitter as an early pioneer, with their work on Finagle. Both of these companies, for better or worse, took a library-centric approach (Eureka/Hystrix/etc or the Finagle lib). This limited their applicability to the JVM.
The sidecar model that Airbnb pioneered with SmartStack, later adopted by Yelp and others, was the cheapest way to give non-Java languages similar resilience/observability semantics. And given the popularity of polyglot architectures, it probably should be the default choice for companies adopting microservices.
Maybe a local proxy, deployed with the service, is a good answer to my objections, rather than a centralised approach. This can help in polyglot environments without the limitations a centralised solution would impose. Something like Istio would be an agent a service connects to locally, used for service discovery, complex routing or rate limiting. The configuration is service specific. Load balancing is done by "dumb" proxies, like in the old days.
Maybe you don't understand how Istio works. The Envoy proxy is locally deployed as a sidecar next to each process. The centralization is entirely for the control plane. The local proxy uses the centrally managed configuration for making local decisions about routing.
As long as proxies remain transparent to services, I see no problem. It becomes a problem when proxies get smarter in terms of providing cross-cutting features, like routing at the payload level (not just on message headers) or doing authentication and authorization at a per-resource level. That puts constraints on how services are built in this particular environment.
But I see the logic behind the approaches developed by Netflix, and now Istio. If you have a lot of services, orchestration and more central communication management are probably a good way to govern, provided the constraints described above are accepted and services still have the ability to opt out and pursue a different strategy.
The old SOA world was driven by governance. This was a result of the general engineering methodologies and mindsets of that time. Still, API gateways / smarter proxies / etc. could bring that back...
The commercial products in this space (Apigee, Layer 7, MuleSoft and the like) seem to have learned their lessons over the years, but we'll see. Things like Eureka, for example, rely on RESTful discovery protocols that aren't exactly standards, and on well-written client libraries.
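(To make the client-library point concrete, client-side discovery with Eureka looks roughly like the Java sketch below; it assumes an already-configured EurekaClient, and the service name is made up.)

    import com.netflix.appinfo.InstanceInfo;
    import com.netflix.discovery.EurekaClient;

    // Rough sketch of client-side discovery with Eureka: the lookup and the
    // choice of instance happen inside the client library, which is why every
    // non-JVM language needs its own well-behaved implementation of this logic.
    public class RecommendationsCaller {

        private final EurekaClient eurekaClient; // assumed already configured

        public RecommendationsCaller(EurekaClient eurekaClient) {
            this.eurekaClient = eurekaClient;
        }

        public String resolveBaseUrl() {
            // "recommendations" is a hypothetical VIP/service name.
            InstanceInfo instance =
                    eurekaClient.getNextServerFromEureka("recommendations", false);
            return "http://" + instance.getHostName() + ":" + instance.getPort();
        }
    }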
For me "SOA" doesn't imply "an ESB" in the way you seem to understand the term, and even though I've actually worked with Sonic MQ/ESB which brought the name to the scene, I still don't know what people really mean when speaking about "an ESB".
From a developer perspective, service-oriented just means that you're offering/accessing functionality via a well-defined, app-specific network protocol interface, with a standard taxonomy/representation of cross-cutting concerns such as auth, transactions/compensations, message synchronicity, and QoS semantics (e.g. request/response, at-least-once delivery, etc.), most of which fundamentally shape your service implementation code. For example, if you're operating under the assumption that no distributed transactions are available, you'll have to fold the necessary logic for restarts and state management into your application code.
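(A tiny, entirely hypothetical Java sketch of what that folding-in looks like; the service interfaces and names are invented for illustration.)

    import java.util.List;

    // With no distributed transactions, the "undo" logic has to live in the
    // application code itself. All names here are made up.
    public class OrderPlacement {

        interface PaymentService { String charge(long amountCents); void refund(String paymentId); }
        interface InventoryService { void reserve(List<String> skus); }

        private final PaymentService payments;
        private final InventoryService inventory;

        public OrderPlacement(PaymentService payments, InventoryService inventory) {
            this.payments = payments;
            this.inventory = inventory;
        }

        public void placeOrder(long amountCents, List<String> skus) {
            String paymentId = payments.charge(amountCents);
            try {
                inventory.reserve(skus);
            } catch (RuntimeException e) {
                // No two-phase commit to lean on: compensate explicitly, and
                // accept that the compensation itself may need retries.
                payments.refund(paymentId);
                throw e;
            }
        }
    }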
SOA doesn't have to imply an ESB, true, but it usually did in practice. I too worked with Sonic, IBM broker, Mule, but mostly BEA WebLogic and AquaLogic. What was often missing in popular SOA was services-oriented delivery, where the unit of deployment was independently evolvable from others. It was more focused on interface modularity for some future implementation decomposition of the monolith. It solved half the problem.
Microservices have been enabling a new generation to accomplish this decomposition by focusing on services-oriented delivery and deployment, and by dramatically constraining the protocols to HTTP, maybe some pub/sub or gRPC, and not much else (and thus no distributed transactions, simpler QoS levels, etc.).
Very well put. From what I see this is like moving Hystrix from the library level to the network level. No need for applications to care about, or even know about, the circuit breaking they're leveraging. Very excited to try it out!
I'm curious whether circuit breaking is better at the method level of the code, à la Hystrix, or at the network level. We need more in-the-wild experience reports, I think...
The thing you can do at the code level is integrate it with your exceptions and protect against software integration bugs. Imagine that your RPC against a new version of a service fails because of bad data. With a library-level circuit breaker, you can catch the failed exception and blacklist the new version of the service. At the network level, your failure-detection resolution is more limited.
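(For reference, roughly what the library-level version looks like with Hystrix; the RPC client interface and fallback policy here are hypothetical.)

    import com.netflix.hystrix.HystrixCommand;
    import com.netflix.hystrix.HystrixCommandGroupKey;

    // Rough Hystrix sketch: the circuit breaker wraps an individual call, so
    // failures surface as exceptions the application can inspect and react to
    // (e.g. by flagging a misbehaving service version in its own bookkeeping).
    public class GetProfileCommand extends HystrixCommand<String> {

        // Hypothetical RPC client; not part of Hystrix itself.
        public interface ProfileClient { String fetchProfile(String userId) throws Exception; }

        private final ProfileClient client;
        private final String userId;

        public GetProfileCommand(ProfileClient client, String userId) {
            super(HystrixCommandGroupKey.Factory.asKey("ProfileService"));
            this.client = client;
            this.userId = userId;
        }

        @Override
        protected String run() throws Exception {
            // Exceptions thrown here feed the command's failure metrics and,
            // past a threshold, open the circuit.
            return client.fetchProfile(userId);
        }

        @Override
        protected String getFallback() {
            // Application-level choice: return a default, or record that this
            // particular service version is misbehaving (hypothetical policy).
            return "{}";
        }
    }

    // Usage: new GetProfileCommand(client, "42").execute();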