HTML is the most widely used hypermedia format, and it's designed mainly for human consumption. "Machines parsing and using something" is too broad an idea to engage with meaningfully. Browsers parse HTML and do something with it: they present it to humans, who select actions (i.e. hypermedia controls) to perform.
I've programmed machines to use those links, so I'm pretty certain machines can use them. I've never heard of the HTML variation, but I'll have a look at those links.
> I've programmed machines to use those links, so I'm pretty certain machines can use them
I'm curious to learn how it worked.
The way I see it, the key word here is "programmed". Sure, you read the links from responses and eliminated the need to hardcode API routes in the system, but what would happen if a new link is created or an old one is unexpectedly removed? Unless the app somehow presents to the user all available actions generated from those links, it would have to be modified every time to take advantage of newly added links. It would also need rigorous existence checking for every link it uses; otherwise the system would break if a link suddenly disappeared. You could argue that this would never happen, but then it's just regular old API coupling with backward-compatibility concerns.
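To make that concrete, here's a rough sketch of the kind of client I mean. The response shape (a JSON body carrying a "links" map) and the rel names are made up for illustration, but the problem is the same: the client still has to know which rel it wants, and still has to handle the case where it disappears:

    import requests  # assumes the 'requests' package is installed

    def follow(entry_url: str, rel: str) -> dict:
        """Fetch a resource, then follow the named link from its response."""
        doc = requests.get(entry_url).json()
        links = doc.get("links", {})  # hypothetical response shape
        if rel not in links:
            # The link was removed or renamed server-side; the client
            # still has to decide what to do about that.
            raise RuntimeError(f"no '{rel}' link at {entry_url}")
        return requests.get(links[rel]["href"]).json()

    # The route isn't hardcoded, but the rel name effectively is:
    orders = follow("https://api.example.com/", "orders")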
Building on my previous example of hn comments, if hn decides to add another action, for example "preview", the browser would present it to the user just fine, and the user would be able to use it immediately. They could also remove the "reply" button, and again, nothing would break. That would render the form somewhat useless of course, but that's a product question at that point.
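The difference, sketched roughly with the same made-up "links" shape as above: a client that simply surfaces whatever controls arrive doesn't care whether "preview" appears or "reply" disappears, because a human picks from what's actually there:

    import requests

    def present_actions(url: str) -> None:
        """Surface every action the response advertises, browser-style."""
        doc = requests.get(url).json()
        links = doc.get("links", {})  # hypothetical response shape
        rels = list(links)
        for i, rel in enumerate(rels, start=1):
            print(f"{i}. {links[rel].get('title', rel)}")
        choice = int(input("action> "))  # the human supplies the agency
        print(requests.get(links[rels[choice - 1]]["href"]).json())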
This aligns with how I've seen it used. It helps identify forward actions, many of which will be standard to the protocol or to the domain. These can be programmed for: traversed, called, or their data aggregated and presented, using either general or specific logic.
So new actions can be catered for, but novel actions cannot: custom logic has to be added. That logic then becomes part of the domain, and the machine can potentially handle the action from then on.
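Roughly what I mean, again with made-up rel names and response shape: relations that are standard to the protocol or domain get generic handling, and anything the crawler doesn't recognise is skipped until someone writes logic for it:

    import requests

    # Generic handlers for relations that are standard to the protocol or domain.
    HANDLERS = {
        "next": lambda link: requests.get(link["href"]).json(),  # pagination
        "item": lambda link: requests.get(link["href"]).json(),  # drill-down
    }

    def traverse(url: str) -> None:
        doc = requests.get(url).json()
        for rel, link in doc.get("links", {}).items():  # hypothetical shape
            handler = HANDLERS.get(rel)
            if handler is None:
                # A novel action: the machine can't invent intent for it,
                # so it is ignored until custom logic is added.
                print(f"skipping unknown relation: {rel}")
                continue
            handler(link)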
Hope that helps illustrate how it can be used programmatically.
Right, so that means that the primary novel aspect of REST (according to the coiner of that term, Roy Fielding), the uniform interface, is largely wasted. Effectively you get a level of indirection on top of hard-coded API response endpoints. Maybe better, but I don't think by much, and it's a lot of work for the payoff.
To take advantage of the uniform interface you need a consumer with agency, one that can respond to new and novel interactions as they are presented in the form of hypermedia controls. The links above explain this in more depth; the section on REST in our book is a good overview:
Your understanding is incorrect; the links above will explain it. HATEOAS (and REST, which is a superset of HATEOAS) requires a consumer to have agency in order to make any sense (see https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.h...)
MCP could profitably explore adding hypermedia controls to the system; it would be interesting to see whether agentic MCP APIs are able to self-organize:
https://x.com/htmx_org/status/1938250320817361063