The sequence of Turbo Pascal / Delphi / C# / TypeScript, which brought us LSP as a sidekick (!), has IMHO benefited the whole industry at least as much as "transpile C# to ECMAScript via TypeScript". No, much, much more.
I do not see a problem with MS also having an internal use case.
You know, I wouldn't stop using Python "because" Guido now works at MS ...
Python has an elected steering council and core team. The governance process explicitly tries to avoid conflict of interest by disallowing more than two steering council members working for the same employer. See PEP 13 [1].
By contrast, .NET is controlled by Microsoft (with veto over board decisions [2] and code changes [3]), integrates telemetry that sends your data to Microsoft by default [4], and deliberately hobbles features to benefit Microsoft [5].
The complaint above was that JS was becoming too much like C#, so .NET's steering committee isn't the body at issue in the original concern. (Also, as pointed out, that "deliberate hobbling" case was litigated in the public square on HN at the time, then revised and "unhobbled" after the outcry.)
As for the other direction, JS has a somewhat similar (if rather more complex) situation to Python's, with its steering committee being Ecma International's TC39 (Technical Committee 39).
Ecma International has similar By-Laws and Rules designed to manage conflicts of interest and to keep too much power from consolidating in a single employer of committee members. Ecma is maybe even a little "stricter" than Python, because its rules treat the companies themselves as the members, and a company gets one vote no matter how many of its employees interact with the process.
FVWM, TWM, and others had a grid-like move/resize option where, instead of redrawing the whole window, you would just move a wireframe around until you placed the window 'down'.
That's underselling XML. XML is explicitly meant for data serialization and exchange; XSD reflects that, and it's the reason JAXB (the Java XML binding tooling) exists.
Don't get me wrong: JSON is superior in many respects, and XML is utterly overengineered.
But XML absolutely was _meant_ for data exchange, machine to machine.
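A minimal sketch of that machine-to-machine use, in Python's stdlib (the "order" record here is made up for illustration):

  import xml.etree.ElementTree as ET

  # Sender: serialize a structured record to XML for exchange.
  order = ET.Element("order", id="42")
  ET.SubElement(order, "sku").text = "ABC-123"
  ET.SubElement(order, "quantity").text = "3"
  payload = ET.tostring(order, encoding="unicode")

  # Receiver: parse the payload back into structured data.
  parsed = ET.fromstring(payload)
  print(parsed.get("id"), parsed.find("sku").text, parsed.find("quantity").text)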
The design goals for XML are:

1. XML shall be straightforwardly usable over the Internet.
2. XML shall support a wide variety of applications.
3. XML shall be compatible with SGML.
4. It shall be easy to write programs which process XML documents.
5. The number of optional features in XML is to be kept to the absolute minimum, ideally zero.
6. XML documents should be human-legible and reasonably clear.
7. The XML design should be prepared quickly.
8. The design of XML shall be formal and concise.
9. XML documents shall be easy to create.
10. Terseness in XML markup is of minimal importance.
Or heck, even more concisely from the abstract: "The Extensible Markup Language (XML) is a subset of SGML that is completely described in this document. Its goal is to enable generic SGML to be served, received, and processed on the Web in the way that is now possible with HTML. XML has been designed for ease of implementation and for interoperability with both SGML and HTML."
It's always talking about documents. It was a way to serve up marked-up documents that didn't depend on using the specific HTML tag vocabulary. Everything else happened to it later, and was a bad idea.
The origin of the latter, the EDI/XML WG, was the successor of an EDI/SGML WG which had started in the early 1990s, born out of the desire to get a "universal electronic data exchange" that would work cross-platform (VMS, mainframes, Unix, and even DOS, hehe), and to leverage the successful SGML DocBook interoperability.
Was it niche? Yes. Was it starting in SGML already, and baked into XML/XSD/XSLT? I think so.
>XML shall be straightforwardly usable over the Internet.

...is machine-to-machine communication.
To me, XML is an example of worse is better, or rather, better is worse. It would never have come out of Bell Labs in the early 70s. Neither would JSON, for that matter.
And as for JAXB, it was released in 2003, well into XML's decadent period. The original Java APIs for XML parsing were SAX and DOM, both of which are tag- and document-oriented.
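To see what "tag and document oriented" means in practice, here's the same style of API in Python's stdlib DOM (Python rather than Java purely for brevity; the note document is invented):

  import xml.dom.minidom

  # A DOM parser hands back a tree of element and text nodes,
  # not application-level records.
  doc = xml.dom.minidom.parseString("<note><to>Ada</to><body>Hi!</body></note>")
  for node in doc.documentElement.childNodes:
      print(node.tagName, node.firstChild.nodeValue)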
If you don't have a merge from main into the branch further down, then git only bothers you about the most recently introduced conflicts, the ones you'd have to resolve anyhow, and it remembers how you've resolved those.
2. apt repositories are cryptographically signed, centrally controlled, and legally accountable.
3. apt search is understood to be approximate, distro-scoped, and slow-moving. Results change slowly and rarely break scripts. PyPI search rankings change frequently by necessity.
4. Turning PyPI search into an apt-like experience would require distributing a signed, periodically refreshed global metadata corpus to every client. At PyPI's scale, that is nontrivial in bandwidth, storage, and governance terms (a sketch of the metadata side follows this list).
5. apt search works because the repository is curated, finite, and opinionated.
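Worth noting that the per-package metadata side already behaves like a stable contract, unlike search ranking. A minimal client sketch against PyPI's JSON endpoint (the package name is just an example):

  import json
  import urllib.request

  # Per-package metadata is served at a stable JSON endpoint; each
  # release file carries digests the client can verify after download.
  with urllib.request.urlopen("https://pypi.org/pypi/requests/json") as resp:
      meta = json.load(resp)

  for f in meta["urls"]:
      print(f["filename"], f["digests"]["sha256"])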
The install side is basically Merkle-friendly (immutable artifacts, append-only metadata, hashes, mirrors).
Search isn’t. Search results are derived, subjective, and frequently rewritten (ranking tweaks, spam/malware takedowns, popularity signals). That’s more like constantly rebasing than appending commits.
You can Merklize “what files exist”; you can’t realistically Merklize “what should rank for this query today” without freezing semantics and turning CLI search into a hard API contract.
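To make "Merklize what files exist" concrete, here's a toy root computation over a file-to-digest mapping (the scheme is illustrative, not PyPI's actual format):

  import hashlib

  def merkle_root(leaves):
      # Pairwise-hash a sorted list of leaf hashes up to a single root.
      level = sorted(leaves)
      while len(level) > 1:
          if len(level) % 2:  # odd count: duplicate the last hash
              level.append(level[-1])
          level = [hashlib.sha256((a + b).encode()).hexdigest()
                   for a, b in zip(level[::2], level[1::2])]
      return level[0]

  # Append-only data: the root only changes when new leaves are added.
  files = {"pkg-1.0.tar.gz": "ab12...", "pkg-1.0-py3-none-any.whl": "cd34..."}
  leaves = [hashlib.sha256(f"{name}:{digest}".encode()).hexdigest()
            for name, digest in files.items()]
  print(merkle_root(leaves))

A search ranking, by contrast, is a function that gets rewritten, so any root over its output would churn on every ranking tweak.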