Discoverability by whom, exactly? Like if it's for developer humans, then good docs are better. If it's for robots, then _maybe_ there's some value... But in reality, it's not for robots.
HATEOAS solves a problem that doesn't exist in practice. Can you imagine an API provider being like, "hey, we can go ahead and change our interface...should be fine as long as our users are using proper clients that automatically discover endpoints and programmatically adapt accordingly"? Or can you imagine an API consumer going, "well, this HTTP request delivers the data we need, but let's make sure not to hit it directly -- instead, let's recursively traverse a graph of requests each time to make sure this is still the way to do it!"
You have got it wrong.
Let's say I build an API with different user roles: some users can delete an object, others can only read it. The UI knows the semantics and logical names of the operations, so when it gets the object from the server it can simply check whether certain operations are available, instead of encoding the permission checks on the client side. That is discoverability. It does not imply generated interfaces; the UI may know something about the data in advance.
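A minimal sketch of that client-side check (the field names and the `_links` convention are illustrative, borrowed from the HAL style, not from any specific API):

```python
# Hypothetical resource payload in a HAL-like style: the server embeds the
# operations this particular user is allowed to perform.
document = {
    "id": 42,
    "title": "Quarterly report",
    "_links": {
        "self":   {"href": "/documents/42"},
        "delete": {"href": "/documents/42"},  # present only if the user may delete
    },
}

def can(resource: dict, operation: str) -> bool:
    """UI-side check: did the server offer this operation?"""
    return operation in resource.get("_links", {})

# The UI shows the Delete button only when the server listed the operation,
# so no role/permission rules are duplicated on the client.
print(can(document, "delete"))  # True
print(can(document, "edit"))    # False
```

The point is that the client never re-implements the permission logic; it only reacts to what the server advertised.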
> You have got it wrong. Let's say I build an API with different user roles: some users can delete an object, others can only read it. The UI knows the semantics and logical names of the operations, so when it gets the object from the server it can simply check whether certain operations are available, instead of encoding the permission checks on the client side.
Maybe you should reconsider the way you ask questions on this forum. Your tone is not appropriate and the question itself just demonstrates that you don't understand this topic.
Yes, I'm aware of this header and know the web standards well enough.
In a hypermedia API you communicate to the client the list of all operations in the context of the resource (note: not ON the resource), which includes not only basic CRUD but also operations on adjacent resources (e.g. on a user account you may have an operation for sending a message to that user). Yes, in theory one could use OPTIONS with a non-standard response body to communicate operations that cannot be expressed as plain HTTP verbs in the Allow header.
However such a solution is not practical, because it requires an extra round trip for every resource. There's a better alternative, which is to provide the list of operations along with the resource itself using one of the common standards - HAL, JSON-LD, Siren, etc. The example in my other comment in this thread is based on HAL. If you wonder what that is, look no further than Spring - it has supported HAL APIs out of the box for quite a long time. And of course there's an RFC draft and a Wikipedia article (https://en.wikipedia.org/wiki/Hypertext_Application_Language).
This is actually what we do at [DAYJOB], and it's been working well for over 12 years. Like any other kind of interface indirection, it adds overhead in exchange for being able to change the producer's side of the implementation without having to change all of the consumers at the same time.
In this example you receive the list of permitted operations embedded in the resource model. href="." means you can perform the operation against the resource's self link.
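As a hedged illustration (the resource, link relations, and URLs below are invented for the sake of the example), such a HAL-style payload and the href="." convention might look like:

```python
# Invented HAL-style payload: "_links" lists the permitted operations, and
# href "." means "perform this operation against the resource's self link".
account = {
    "name": "alice",
    "_links": {
        "self":         {"href": "/users/alice"},
        "deactivate":   {"href": "."},                     # acts on this resource
        "send-message": {"href": "/users/alice/messages"}, # adjacent resource
    },
}

def resolve(resource: dict, rel: str) -> str:
    """Resolve a link relation; '.' points back at the self link."""
    href = resource["_links"][rel]["href"]
    return resource["_links"]["self"]["href"] if href == "." else href

print(resolve(account, "deactivate"))    # /users/alice
print(resolve(account, "send-message"))  # /users/alice/messages
```

Note how the operation list can mix actions on the resource itself with actions on adjacent resources, which is the part plain HTTP verbs can't express.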
The promise of REST and HATEOAS was best realized not by building RESTful apps like, say, "my airline reservation app" but by building a programming system, spiritually like HTTP + HTML, in which you'd be able to declaratively specify applications, of which "my airline reservation app" could be one and "my sports gambling service" could be another. So some smart person would invent a new application protocol with rich semantics as you did above, and a new type of user agent installed on desktops would understand how to present it to the user, and the app on the server would just assemble the resources in this rich format, directing users to their choices through the states of the program.
So that never got done (because it's complex), and people started building apps like "my airline reservation app" but then realized that to build such a domain app you don't need all the abstraction of a full REST system.
Oh, interesting. So rather than the UI computing which operations should currently be allowed by, say, knowing the user's current role and having rules baked into it about the relationship between roles and UI widgets, the UI can compute which widgets should be on or off simply from explicit statements of capability from the server.
I can see some meat on these bones. The counterpoint is that the protocol is now chattier than it would be otherwise... but a full analysis of bandwidth to the client would have to factor in that you'd otherwise have to ship over a whole framework to implement those rules and keep them synchronized between the client and server implementations.
I’d suggest that bandwidth optimization should happen only when it becomes critical, with the presence of hypermedia controls toggled via a feature flag or header. This way the frontend becomes simpler, so FE dev speed and quality improve, but the backend becomes more complex. The main problem here is that most backend frameworks support RMM level 2, and hypermedia controls require a different architecture to keep server code from getting verbose. Unfortunately REST wasn’t well understood, so full support for it was never a focus of the open source community.
Or perhaps just an Allow header on the response to another query (e.g. when fetching an object, the server could respond with Allow: GET, PUT, DELETE if the user has read-write access and Allow: GET if it’s read-only).
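A sketch of that idea, assuming a server that derives the header value from the caller's access level (the access-level names here are made up):

```python
# Compute the Allow header to attach to a response, based on the caller's access.
VERBS_BY_ACCESS = {
    "read-write": ["GET", "PUT", "DELETE"],
    "read-only":  ["GET"],
}

def allow_header(access: str) -> str:
    """Return the value for the Allow response header."""
    return ", ".join(VERBS_BY_ACCESS[access])

print(allow_header("read-write"))  # GET, PUT, DELETE
print(allow_header("read-only"))   # GET
```

A client can then inspect the Allow header on the GET response before rendering edit or delete controls, with no extra round trip.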
That’s a neat idea actually, I think I’ll need to read up on the semantics of Allow again… There is no reason you couldn’t just include it with arbitrary responses, no?
I always thought soooo many REST implementations and explainers were missing a trick by ignoring the OPTIONS verb, it seems completely natural to me, but people love to stuff things inside of JSON.
It’s something else. The list of available actions may include other resources, so you cannot express it in pure HTTP; you need a data model for that (HAL is one possible solution, but there are others).
That API doesn’t look like a REST level 3 API. For example, there’s an endpoint to create a node, but it is not referenced by the root or anywhere else. The GetNode endpoint does include some traversal links in its response, but those links are part of the domain model, not part of the protocol. HAL, by contrast, offers a protocol by which you enhance your domain model with semantically meaningful links and additional resources.
It is interesting to me that GraphQL would be in "the swamp of POX," mostly because my personal experience was that shifting from hand-built REST to GraphQL solved a lot of problems we had, mostly around discovery and composition. The ability to sometimes ask for a little data and sometimes a lot at the same endpoint is huge, and the fact that all of that happens under one syntax, as opposed to smearing such controls across headers, method, URI, and query params, decreased cognitive load.
Perhaps the real issue was that XML is awful and a much thinner resource representation simplifies most of the problems for developers and users.
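The "a little or a lot at the same endpoint" point can be illustrated without a real GraphQL server; this toy resolver only mimics top-level field selection, and every name in it is invented:

```python
# One data source, one "endpoint": the client's selection decides how much
# of the object comes back.
user = {
    "id": 1,
    "name": "Ada",
    "email": "ada@example.com",
    "posts": [{"title": "On engines"}, {"title": "On notes"}],
}

def execute(selection: list, data: dict) -> dict:
    """Toy resolver: return only the fields the query selected."""
    return {field: data[field] for field in selection if field in data}

print(execute(["name"], user))                    # {'name': 'Ada'}
print(execute(["name", "email", "posts"], user))  # the larger shape, same endpoint
```

Real GraphQL adds nested selections, arguments, and a type system on top, but the core appeal is this one: the response shape is driven by the query, not by which URL you hit.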
> If it's for robots, then _maybe_ there's some value...
Nah, machine readable docs beat HATEOAS in basically any application.
The person that created HATEOAS really wasn't designing an API protocol. It's a general-purpose content delivery architecture, and not very useful for software development.
The problems do exist, and they're everywhere. People just invented all sorts of hacks and workarounds for these issues instead of thinking more carefully about them. See my posts in this thread for some examples.
For most APIs that doesn’t deliver any value which can’t be gained from API docs, so it’s hard to justify. However, these days it could be very useful if you want an AI to be able to navigate your API. But MCP has the spotlight now.
I think you throw away a useful description of an API by lumping them all under RPC. If you tell me your API is RPC instead of REST then I'll assume that:
* If the API is available over HTTP then the only verb used is POST.
* The API is exposed on a single URL and the `method` is encoded in the body of the request.
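In other words, the classic single-endpoint RPC shape looks roughly like this (the /rpc URL and the method name are illustrative, loosely in the JSON-RPC spirit):

```python
import json

# Every call is a POST to one URL; the operation lives in the body,
# unlike REST's verb-plus-route addressing.
def make_rpc_request(method: str, params: dict) -> dict:
    return {
        "url": "/rpc",
        "http_verb": "POST",
        "body": json.dumps({"method": method, "params": params}),
    }

req = make_rpc_request("deleteDocument", {"id": 42})
print(req["http_verb"], req["url"])  # POST /rpc
print(req["body"])
```

Contrast with the "REST" shape, where the same operation would be DELETE /documents/42: the verb and the route carry the information that here sits in the body.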
It is true that if you say "RPC" I'm more likely to assume gRPC or something like that. If you say "REST", I'm 95% confident that it is a standard, familiar OpenAPI-style JSON-over-HTTP API, but I'll reserve a 5% probability that it is actually HATEOAS and deal with that. I'd say that if you are doing Roy Fielding-certified REST/HATEOAS, it is non-standard and you should call it out specifically by using the term "HATEOAS" to describe it.
In the real world, nobody refers to "REST" APIs, the kind that use HTTP verbs and have routes like /resource/id, as RPC APIs. Outside of this thread, that usage just doesn't exist.
At some level, language is outside of your control as an individual, even if you think it's literally wrong; you sometimes have to choose between being 'correct' and communicating clearly.