Correct. There are no fixed-size pages, just dynamic batches based on the lastEventId.
This is much easier to implement, on both the server and the client side, and it greatly reduces the amount of data transferred. With fixed pages you would return the content of the latest page for every poll request until it is "full".
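To make that concrete, here is a minimal sketch of such a polling consumer in Python (the endpoint URL, the lastEventId query parameter, and the event field names are assumptions for illustration; check the spec for the exact contract):

    import time
    import requests  # third-party HTTP client (pip install requests)

    FEED_URL = "https://example.com/orders-feed"  # hypothetical feed endpoint

    def process(event):
        # domain logic goes here
        print(event["id"], event.get("type"))

    def poll_forever():
        last_event_id = None
        while True:
            params = {"lastEventId": last_event_id} if last_event_id else {}
            batch = requests.get(FEED_URL, params=params, timeout=30).json()
            if not batch:
                time.sleep(5)  # caught up: back off before polling again
                continue
            for event in batch:
                process(event)
                last_event_id = event["id"]  # remember the position, not a page number

    if __name__ == "__main__":
        poll_forever()

Because the client always asks for "everything after the last event I processed", the batch size adapts to how far behind it is, and an up-to-date client transfers next to nothing.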
A few issues with message brokers, especially in system-to-system integration:
- Security: In B2B scenarios or public APIs, would you open your broker to the WWW? HTTP has a solid infrastructure, including firewalls, DDoS defence, API gateways, certificate management, ...
- Organisational dependencies: Some team needs to maintain the broker (team 1, team 2, or a third platform team). You have a dependency on this team whenever you need a new topic, a new user, ... Who is on call when something goes wrong?
- Technology ingestion: A message broker introduces technology into the system. You need compatible client libraries, have to handle version upgrades, need resilience concepts, and have to learn troubleshooting...
> Security: In B2B scenarios or public APIs, would you open your broker to the WWW? HTTP has a solid infrastructure, including firewalls, DDoS defence, API gateways, certificate management, ...
That's a valid point. I think it's a pity we don't have an equivalent standard for asynchronous messaging with the same level of support as HTTP. However, there are lots of options for presenting an asynchronous public API that uses your message broker behind the scenes without fully exposing it: WebSockets, SSE, webhooks, etc.
> Organisational dependencies: Some team needs to maintain the broker (team 1, team 2, or a third platform team). You have a dependency on this team whenever you need a new topic, a new user, ...
True, but don't you have that anyway? How is this different from requesting a new database, a service route, a service definition, etc.?
> Technology ingestion: A message broker introduces technology into the system. You need compatible client libraries, have to handle version upgrades, need resilience concepts, and have to learn troubleshooting...
How simple or complex this is depends on the concrete broker at hand. There are some protocols, e.g. STOMP, that are simple enough that you could write your own client. And as I wrote in the parent: HTTP feeds are a technology as well. You'll have to think about troubleshooting and resiliency there too.
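To illustrate how small such a hand-rolled client can be, here is a rough STOMP 1.2 sketch in Python over a raw socket (broker address, port, and destination are made up, and heartbeats, error handling, and frame parsing are omitted):

    import socket

    def frame(command, headers, body=""):
        # A STOMP frame is a command line, header lines, a blank line,
        # an optional body, and a NUL terminator.
        head = "\n".join(f"{k}:{v}" for k, v in headers.items())
        return f"{command}\n{head}\n\n{body}\x00".encode()

    sock = socket.create_connection(("broker.example.com", 61613))
    sock.sendall(frame("CONNECT", {"accept-version": "1.2", "host": "broker.example.com"}))
    sock.sendall(frame("SUBSCRIBE", {"id": "0", "destination": "/queue/orders", "ack": "auto"}))

    while True:
        data = sock.recv(4096)  # CONNECTED first, then MESSAGE frames
        if not data:
            break
        print(data.decode(errors="replace"))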
Hi! I am the author of http-feeds.org. Thank you for your feedback.
For this spec I aimed to keep it as simple as possible. And plain polling-based JSON endpoints are the simplest and most robust, IMHO.
If you need it, you could implement an SSE representation on the server endpoint via proper content negotiation.
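A hedged sketch of what that content negotiation could look like, using Flask (the endpoint path and the events_after helper are made up for illustration):

    import json
    from flask import Flask, Response, request

    app = Flask(__name__)

    def events_after(last_event_id):
        # placeholder: load the next batch of events from your store
        return []

    @app.get("/orders-feed")
    def feed():
        events = events_after(request.args.get("lastEventId"))
        if "text/event-stream" in request.headers.get("Accept", ""):
            # SSE representation: one id/data block per event
            body = "".join(f"id: {e['id']}\ndata: {json.dumps(e)}\n\n" for e in events)
            return Response(body, mimetype="text/event-stream")
        # default representation: a plain JSON batch
        return Response(json.dumps(events), mimetype="application/json")

    if __name__ == "__main__":
        app.run()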
The main reason why I dropped SSE is the lack of proper back pressure, i.e. what happens when a consumer consumes messages more slowly than the server produces them.
Plus, it is quite hard to debug SSE connections, e.g. there is no support in Postman and other dev tools. And long-lived HTTP connections are still a problem in today's infrastructure: for example, there is currently no support for SSE endpoints in the DigitalOcean App Platform, and I am not sure about Google Cloud Run.
I'm not entirely sure what you mean by this. SSEs are just normal GET requests with a custom header and some formal logic around retries. I've even implemented them manually in PHP, using the `retry` field so that I don't need to keep the connection open for longer than a normal page load.
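For what it's worth, the same trick sketched in Python rather than PHP (the route and the next_events helper are assumptions): send a short burst of events plus a retry hint, then end the response; the browser's EventSource reconnects after the retry interval and sends the Last-Event-ID header, so nothing is lost.

    from flask import Flask, Response, request

    app = Flask(__name__)

    def next_events(last_id):
        # placeholder: return [(id, json_string), ...] newer than last_id
        return []

    @app.get("/events")
    def events():
        last_id = int(request.headers.get("Last-Event-ID", "0"))
        lines = ["retry: 5000"]  # tell the client to wait 5s before reconnecting
        for event_id, payload in next_events(last_id):
            lines += [f"id: {event_id}", f"data: {payload}", ""]
        # returning here ends the response, so the connection lives no longer
        # than a normal page load
        return Response("\n".join(lines) + "\n", mimetype="text/event-stream")

    if __name__ == "__main__":
        app.run()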
> The main reason why I dropped SSE is the lack of proper back pressure, i.e. what happens when a consumer consumes messages more slowly than the server produces them.
Could you point me to where the spec handles this, please? As far as I can tell, it has the same problem of the server needing to buffer events until the client next connects.
Thank you for writing this great spec up for others to use!
I think you're totally right that back-pressure and plain GETs are important use cases to support, and am really happy to see a beautiful spec written up to articulate concretely how to support them.
It is also great to be able to switch among these methods of subscription. For instance, if your server can keep a persistent connection open, it's nice to be able to get realtime updates over a single channel, but to still be able to fall back to polling or long-polling if you can't. And if you switch between polling and a subscription, it's nice if you don't have to change the entire protocol, but can just change the subscription method.
Maybe you'd be interested in incorporating your experience, use-cases, and design decisions into that effort? We have been talking about starting at this November's IETF. [1]
For instance, you can do polling over the Braid protocol [2] with a sequence of GETs, where you specify the version you are coming from in the Parents: header:
GET /foo
Parents: "1"

GET /foo
Parents: "2"

GET /foo
Parents: "3"
Each response would include the next version.
And you can get back-pressure over Braid by disconnecting a subscription when your client gets overwhelmed, and then reconnecting again later on with a new Parents: header:
GET /foo
Subscribe: true
Parents: "3"
..25 updates flow back to the client..
**client disconnects**
Now the client can reconnect whenever it's ready for new data:
GET /foo
Subscribe: true
Parents: "28"
Or if it wants to re-fetch an old version, it can simply ask for it via the version ID it got:
GET /foo
Version: "14"
And if the source for these updates is a git repository, we could use a SHA hash for the version instead of an integer:
GET /foo
Version: "eac8eb8cb2f21c5e79c305c738aa8a8171391b36"
Parents: "8f8bfc8ea356d929135d5c3f8cb891031d1539bd"
There's a magical universality to the basic concepts at play here!
OpenJDK builds by Oracle are updated only for 6 months, even for LTS versions.
Plus, these builds are provided for a limited set of platforms only and have no official ready-to-use Docker images.