Are we rediscovering SOA and WSDL, but this time for LLM interop instead of web services? I may be wrong, but I'm starting to wonder whether software engineering degrees should include a history subject about the rise and fall of various architectures, methodologies and patterns.
I wasn't around for WSDL so please correct me if I am wrong - but the main weakness of WSDL was that no applications were able to take advantage of dynamic service and method discovery? A service could broadcast a WSDL but something needed to make use of it, and if you're writing an application you might as well just write against a known API instead of an unknown one. LLMs promise to be the unstructured glue that can take advantage of newly-discovered methods and APIs at runtime.
I was unfortunate enough to work with SOAP and WSDL. There was a pipe dream at the time of automatically configuring services based on WSDL, but it never materialized. What it was very good at (and still has no equal, to my mind) was allowing for quick implementation of API boilerplate. You could point a tool at the WSDL endpoint (which almost always existed at a known relative URL) and it would scaffold an entire API client for whatever language you wanted. Sort of like JSON Schema, but better.
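The discover-then-scaffold workflow described above can be sketched in miniature. Everything here is hypothetical — a made-up "StockService" description and a stub generator standing in for real WSDL codegen tooling; actual WSDL also carries messages, types, bindings, and ports:

```python
# Minimal sketch of the WSDL idea: a machine-readable service description
# that tooling can turn into a typed client. The document below is a
# heavily simplified, hypothetical contract.
import xml.etree.ElementTree as ET

WSDL = """\
<definitions xmlns="http://schemas.xmlsoap.org/wsdl/" name="StockService">
  <portType name="StockPort">
    <operation name="GetQuote"/>
    <operation name="ListSymbols"/>
  </portType>
</definitions>
"""

NS = {"wsdl": "http://schemas.xmlsoap.org/wsdl/"}

def discover_operations(wsdl_text: str) -> list[str]:
    """Return the operation names the service advertises -- the discovery
    step a codegen tool (or, today, an LLM agent) would start from."""
    root = ET.fromstring(wsdl_text)
    return [op.get("name") for op in root.findall(".//wsdl:operation", NS)]

def make_client(wsdl_text: str):
    """Scaffold a stub client with one method per advertised operation,
    standing in for the source files real WSDL codegen would emit."""
    class Client:
        pass
    for name in discover_operations(wsdl_text):
        def stub(self, *args, _op=name):
            return f"called {_op} with {args}"
        setattr(Client, name, stub)
    return Client()

client = make_client(WSDL)
print(discover_operations(WSDL))  # ['GetQuote', 'ListSymbols']
print(client.GetQuote("ACME"))
```

Real tooling did the same thing ahead of time, emitting strongly typed client source rather than dynamic stubs.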
This also meant that you could do things like diff your current service API client against an updated client generated from the broadcasting service. For example, if the service changed the parameters or data objects, or deprecated or added functions, you could easily see how your client implementation differed from the service interface. It also provided some rudimentary versioning functionality, IIRC. Servers generally also made this information available through an HTML front-end for documentation purposes.
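That contract-diffing idea can be sketched the same way. The operation/parameter maps below are hypothetical stand-ins for two parsed versions of a service description:

```python
# Sketch of diffing an old service contract against a new one.
# The dicts map hypothetical operation names to their parameter lists,
# standing in for what you'd extract from two versions of a WSDL.
old = {"GetQuote": ["symbol"], "ListSymbols": []}
new = {"GetQuote": ["symbol", "currency"], "GetHistory": ["symbol", "days"]}

def diff_contract(old, new):
    """Report operations added, removed, or changed between two contracts."""
    added = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    changed = sorted(op for op in set(old) & set(new) if old[op] != new[op])
    return {"added": added, "removed": removed, "changed": changed}

print(diff_contract(old, new))
# {'added': ['GetHistory'], 'removed': ['ListSymbols'], 'changed': ['GetQuote']}
```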
So while the promise of services one day configuring themselves at runtime was there, it wasn't really ever an expectation. IMO, the reason WSDL failed is that XML is terrifically annoying to work with and SOAP is insanely complex. JSON and REST were much simpler in every way you can imagine and did the same job. They were also much more efficient to process and transmit over the network. Less cognitive load for the dev, less processor load, less network traffic.
So the "runtime" explanation isn't really valid as an excuse for its failure. In practice, discovery meant "as a programmer, you can know exactly what functions, parameters, and data objects any service has available by visiting a URL", and much less "as a runtime client, you can auto-configure a service call to a completely new and unknown service using WSDL". The second thing was a claim about what might one day be possible, but it wasn't generally used in practice.
Some of us are still building new products with XML RPC techniques.
WSDLs and XSDs done right are a godsend for transmitting your API spec to someone. I use .NET and can call xsd.exe to generate classes from the files in a few seconds. It "just works" if both sides follow all of the rules.
The APIs I work with would be cartoonish if we didn't have these tools. We're talking 10 megabytes of generated sources. It is 100x faster to generate these types and then tunnel through their properties via intellisense than it is to read through any of these vendors' documentation.
> WSDLs and XSDs done right are a godsend for transmitting your API spec to someone. I use .NET and can call xsd.exe to generate classes from the files in a few seconds.
This sounds like protobuf and gRPC. Is that a close analogy?
It's like those things, but I've never seen protobuf or gRPC used for APIs this extensive.
The tooling around these paths is also lackluster by comparison if you're using something like Visual Studio.
I'd rather fight XML namespaces and HTTP/1.1 transports than sort through the wreckage of what "best practices" has recently brought to bear - especially in terms of unattended complexity in large, legacy enterprises. Explaining to a small bank in Ohio that they're going to need to adjust all of their firewalls to accommodate some new protocols is a total nonstarter in my business.
The latter would add subscription and streaming, and more efficient transports. But yeah they are basically the same idea.
I hate that for years the concept of RPC was equated with XML, which in turn was equated with some implementation of the (XML-based) tooling, and then there was a whole lot of distracting discourse around XML vs JSON. We kind of still have that these days with YAML vs whatever.
We have already been through some generations of this rediscovery, and I've worked at places where GraphQL type importing, protobuf stub generation, etc. all worked in just the same way. There's a post elsewhere on HN today about how awesome it is to put your logic _in the database_, which I remember at least two generations of, in the document DB era as well as the relational era.
If there's one thing I've observed about developers in general, it's that they'd rather build than learn.