They are re-learning those lessons slowly. OpenAPI and JSON Schema are pretty much poor re-implementations of SOAP and XSD, but for JSON. I don't want to be that get-off-my-lawn guy, but it's laughable how equivalent they are for 99% of daily use cases.
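To make the equivalence concrete, here is roughly the same constraint set, first in JSON Schema, then in XSD. Both are minimal sketches with invented names, and the XSD fragment assumes the usual xs prefix bound to the XML Schema namespace:

    {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "age": { "type": "integer", "minimum": 0 }
      },
      "required": ["name"]
    }

    <!-- age is optional and must be a non-negative integer -->
    <xs:complexType name="person">
      <xs:sequence>
        <xs:element name="name" type="xs:string"/>
        <xs:element name="age" minOccurs="0">
          <xs:simpleType>
            <xs:restriction base="xs:integer">
              <xs:minInclusive value="0"/>
            </xs:restriction>
          </xs:simpleType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>

Different syntax, same ideas: typed fields, optionality, value restrictions.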
Every time I hear someone talking about validating JSON I just think about how, despite its flaws, XSD is actually pretty decent even at 20 years old.
The first time I used XSD was in 2001, I think, for a format we were developing to do human rights violation reporting.
One part of the document would be a list of people committing violations, another part a list of people witnessing violations, and then another part the violations themselves.
The violations part would have attributes saying which people had taken part in the violation and who had witnessed it. These attributes were comma-separated lists of IDs.
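To make the shape concrete, here is a minimal sketch of that kind of document (the element and attribute names are invented for illustration, not the actual format we used):

    <report>
      <perpetrators>
        <person id="p1"><name>A. Example</name></person>
        <person id="p2"><name>B. Example</name></person>
      </perpetrators>
      <witnesses>
        <person id="w1"><name>C. Example</name></person>
      </witnesses>
      <violations>
        <!-- the attributes are comma-separated lists of ids
             pointing back at the person elements above -->
        <violation perpetrators="p1,p2" witnesses="w1">
          <description>What happened, where, and when.</description>
        </violation>
      </violations>
    </report>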
This structure is of course easy to check with XPath, probably via Schematron. But there is no real way to represent this kind of context-dependent structure in the first version of XSD (I have not kept up, for reasons that shall become clear).
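For instance, a Schematron sketch of the cross-reference check, reusing the invented names from the example above and assuming an XPath 2.0 query binding so tokenize() is available:

    <sch:schema xmlns:sch="http://purl.oclc.org/dsdl/schematron"
                queryBinding="xslt2">
      <sch:pattern>
        <sch:rule context="violation">
          <!-- every token in each comma-separated list must match
               a declared person id somewhere in the document -->
          <sch:assert test="every $id in tokenize(@perpetrators, ',')
                            satisfies exists(//person[@id = normalize-space($id)])">
            Unknown perpetrator id in <sch:value-of select="@perpetrators"/>
          </sch:assert>
          <sch:assert test="every $id in tokenize(@witnesses, ',')
                            satisfies exists(//person[@id = normalize-space($id)])">
            Unknown witness id in <sch:value-of select="@witnesses"/>
          </sch:assert>
        </sch:rule>
      </sch:pattern>
    </sch:schema>

XSD 1.0's key/keyref compares whole values, so it can't reach the individual tokens inside a comma-separated attribute like this.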
Which led me to declare that XSD sucks. I have to admit that Henry Thompson is a great programmer, one of the most worthwhile speakers on technical issues I have ever had the pleasure to hear, and his model of XSD validation as a finite state machine is elegant. But that still does not make XSD suck any less, because the standard could not validate many common markup structures.
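The FSM view is easy to see in a small example (my own sketch, not Thompson's actual formulation): an XSD content model is essentially a regular expression over child element names, which a validator can compile to a finite state machine.

    <!-- content model: title, then one or more authors, then an optional year -->
    <xs:complexType name="bookType">
      <xs:sequence>
        <xs:element name="title" type="xs:string"/>
        <xs:element name="author" type="xs:string" maxOccurs="unbounded"/>
        <xs:element name="year" type="xs:gYear" minOccurs="0"/>
      </xs:sequence>
    </xs:complexType>
    <!-- as a regex over children: title author+ year?
         as an FSM: S0 -title-> S1 -author-> S2 -author-> S2 -year-> S3,
         accepting in S2 or S3 -->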
Yeah, I also hit the XSD complexity ceiling on occasion. Most of what we used it for back in the early-to-mid 00s was quite simple though, and it did make much of our work simpler.
That XSD is complex is one thing; some things are by their nature complex, OK. That it was not able to validate common XML structures was another thing. A third was that support across validation tools sucked.
Actually, this combination of difficulties made it sort of enjoyable for me to solve problems stemming from its usage, and to look smart for doing so, while still feeling like I was using a deficient tool.
XML and its associated formats were just so complex. I remember considering getting a book on XML and it was 4 inches thick. Just for a text-based data storage format...
This is just prohibitively complex. Formats like JSON and YAML thrive because they don't have the complexity of trying to fit every possible scenario ever. The KISS principle still works.
2. These XML books tended to have a section on XML and well-formedness, namespaces, UTF-8, and examples of designing a format - generally a book or address format - and all this stuff probably came to approximately 80-115 pages. Which was what you needed to understand the basics of XML.
3. Then would come the secondary stuff to understand: XPath and XSLT. I would say this would be another 100-150 pages, so a query language and a programming DSL for manipulating the data/document format. All this together, around 265 pages.
4. Then validation and entities in DTDs, with a note that this was old stuff from SGML days, that you didn't need it, and that there would be some other way to validate really soon. Another 60 pages? (And then when that validation language came, it sucked, as I noted elsewhere.)
5. Then, because tech books need to be thick and a 300-page book is not big enough, a bunch of stuff that never amounted to anything, like XLink, or some breathless stuff about various XML formats, maybe a talk about SVG and VML, XSL-FO, blah blah blah. Another 300 pages of unnecessary stuff.
I guess nobody knows about document formats anymore.