Hacker News

Almost every time I see a discussion about LiveView there’s someone complaining about the issue of latency/lag, and how it makes LiveView unsuitable for real-world applications.

From what I understand, the issue is that every event that happens on the client (say, a click) has to make a roundtrip to the server before the UI can be updated. If latency is high, this can make for a poor user experience, the argument goes.

As the creator of LiveView, what’s your take on this? Is it a real and difficult-to-solve issue, or do people just not see "the LiveView way" of solving it?

I think LiveView looks amazing, but this possible issue (in addition to chronic lack of time) has made me a little unsure of whether it’s ready to use for a real project.

Thanks for creating Phoenix, btw!



These kinds of discussions miss a ton of nuance unfortunately (as most tech discussions do), so hopefully I can help answer this broadly:

First off, it's important to call out how LiveView's docs recommend folks keep interactions purely client side for purely client side interactions: https://hexdocs.pm/phoenix_live_view/Phoenix.LiveView.html#m...

> There are also use cases which are a bad fit for LiveView:

> Animations - animations, menus, and general UI events that do not need the server in the first place are a bad fit for LiveView. Those can be achieved without LiveView in multiple ways, such as with CSS and CSS transitions, using LiveView hooks, or even integrating with UI toolkits designed for this purpose, such as Bootstrap, Alpine.JS, and similar

Second, it's important to call out how LiveView will beat client-side apps that necessarily need to talk to the server to perform writes or reads, because we already have the connection established, there's less overhead on the other side since we don't need to fetch the world, and we send less data as the result of the interaction. If you click "post tweet", whether it's LiveView or React, you're talking to the server, so there's no more or less suitability there compared to an SPA.

I had a big writeup about these points on the DockYard blog for those interested in this kind of thing, along with LiveView's optimistic UI features:

https://dockyard.com/blog/2020/12/21/optimizing-user-experie...


Thanks for the pointers and insights. I’ve been reading up on this tonight (local time), and this whole issue seems to be mostly a misconception.

Between things like phx-disable-with and phx-*-loading, and the ability to add any client-side logic using JS, there don't really seem to be any limitations compared to a more traditional SPA using (for example) React and a JSON API.

I hope I haven’t added to the confusion about this by bringing it up, I was just very curious to hear your thoughts on it.
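For what it's worth, here's a rough sketch of how those pieces fit together in a template (the phx-disable-with binding and the phx-submit-loading class come from LiveView's docs; the form, event name, and styling are made up for illustration):

```html
<!-- While the round trip is in flight, LiveView swaps the button text
     via phx-disable-with and applies the phx-submit-loading class to
     the form, so feedback is instant even before the server replies. -->
<form phx-submit="save">
  <button type="submit" phx-disable-with="Saving...">Save</button>
</form>

<style>
  /* Purely client-side loading feedback, no extra JS needed */
  .phx-submit-loading button { opacity: 0.5; cursor: wait; }
</style>
```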


I think the big difference is that with React a lot of interactions can be completed entirely client side, with the server-side component happening only after the fact (asynchronously).

I’ll grant you that that isn’t often the case, and recovering from inconsistencies is pretty painful, but I can see how people would go for that.

I kind of like the idea I can just build all my code in one place instead of completely separate front and back-end though.


> LiveView will beat client-side apps that necessarily needs to talk to the server to perform writes or reads because we already have the connection established and there's less overhead

Don't modern browsers already share a TCP connection for multiple queries implicitly?


Yeah. The overhead I see that's being reduced from a performance point of view is the server not needing to query the session/user information on every message, compared to ajax. That's true for websockets in general. And then the responses might be slightly smaller because it is an efficient diff with just the changed information.


There's a funny story here. We created Fly.io, Chris created Phoenix. We met earlier this year and realized we'd accidentally built complementary tools. The pithy answer is now "just deploy LiveView apps close to users". If a message round trip (over a pre-established websocket) takes <50ms it seems instantaneous.

This means moving logic to client side JS becomes an optimization, rather than a requirement. You can naively build LiveView and send stuff you shouldn't to the server, then optimize that away by writing javascript as your app matures.

What I've found is that I don't get to the optimize with JS step very often. But I know it's there if I need it.


How would you exactly 'optimize with JS'? Do you think this optimization can be done to the extent of enabling offline experiences? Might not be full functionality, but bookmarks/saved articles, for example.


Lots of answers here including one from Chris McCord himself, but I'll offer my take based on my professional experience developing web apps (though I've never used Phoenix professionally):

A large majority of businesses out there start off targeting one region/area/country. The latency from LiveView in this scenario is imperceptible (it's microseconds). If these businesses are so lucky as to expand internationally, they are going to want to deploy instances of their apps closer to their users regardless of whether or not they are using LiveView.

LiveView could be a huge help to these startups. The development speed to get a concurrent, SPA-style app up and running is unparalleled, and it scales really well. My guess would be that the people who are worried about the latency here (which is going to exist for any SPA anyway) are the ones who are developing personal pages, blogs, educational material, etc. that they are hoping the world is going to see out of the gates. In this case, LiveView is not the answer!!! And as I've stated elsewhere 'round here, LiveView does not claim to be "one-size-fits-all". If latency really IS that big of a concern, LiveView is not the right choice for your app. But there really is a huge set of businesses that could really benefit from using it, either because they are a start-up focused on a single area/region/country, or because they are already making tons of money and can easily afford to "just deploy closer to their users", and could benefit from LiveView's (and Phoenix's) extreme simplicity.


Pretty much this. Also, I’m not sure most people realise how incremental LiveView can be. You can use it for a little widget on any page and later swap in a react component if you truly need one (which most apps probably don’t).

It’s not designed to run the NY Times. But it is a super useful tool that will benefit a ton of apps out there.


Is microseconds correct? Even with a good connection in online games I’ve only seen ping latencies of 3ms or so, and a more common range on an average connection is 20ms-50ms.


Should be, though mileage may vary, of course. I'm having trouble finding a better example but https://elixir-console-wye.herokuapp.com/ is made in LiveView. You can try it out and see what you get (I have no idea where it's deployed; it's a phoenixphrenzy.com winner and there are plenty more there to browse through). Its payloads are a bit larger than some typical responses I have in my personal apps and I'm seeing 1-2ms responses in Toronto, Canada (Chrome devtools doesn't show greater precision than 0.001 for websocket requests).


ms is milliseconds


Yep, which is what I meant in my comment you're replying to (as per my statement that devtools only report to the 0.001). But as pointed out by jclem, I'm probably wrong about microsecond response times anyway. I'm very likely thinking about the MICROseconds I see in dev, which of course doesn't count :) But with the heroku link above, I am seeing as low as 1-3 MILLIseconds in Toronto, Canada.


One light-microsecond is about 300 meters, this must be milliseconds.

Edit: Just saw that this was already pointed out. Apologies, didn’t mean to pile on.
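For anyone who wants to sanity-check the physics, here's the back-of-the-envelope arithmetic (assuming the vacuum speed of light; real fiber is roughly a third slower, which only strengthens the point):

```javascript
// How far can a signal possibly travel per unit of latency?
const SPEED_OF_LIGHT_M_PER_S = 299_792_458;

// Distance covered in one microsecond: ~300 meters.
const metersPerMicrosecond = SPEED_OF_LIGHT_M_PER_S * 1e-6;

// Distance covered in one millisecond: ~300 kilometers.
const kmPerMillisecond = (SPEED_OF_LIGHT_M_PER_S * 1e-3) / 1000;

console.log(Math.round(metersPerMicrosecond)); // ~300 m
console.log(Math.round(kmPerMillisecond));     // ~300 km
```

So a microsecond round trip is physically possible only within a few hundred meters of the server; anything over the internet is necessarily milliseconds.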


I pointed out below that I actually DID mean microseconds but likely skewed by times I was seeing in dev. Hopefully it does not take away from my point that response times are still imperceptible when in roughly the same region (I'm seeing 1-3 milliseconds in the heroku-hosted LiveView app I linked below).


For a lot of the LiveView applications that I write (which is actually quite a few these days), I will usually lean on something like AlpineJS for frontend specific interactions, and my LiveView state is for things that require backend state.

For example, if I have a flag to show/hide a modal to confirm a resource delete, the show/hide flag would live in AlpineJS, while the resource I was deleting would live in the state of my LiveView.

This way, there are no round trips to the server over websocket to toggle the modal. Hopefully that example makes sense :).
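A minimal sketch of that split (Alpine's x-data/x-show/@click directives are real; the "delete" event name and the markup are made up for illustration):

```html
<!-- Modal visibility lives purely in Alpine state, so toggling it
     never touches the websocket; only the actual delete is a
     LiveView event (phx-click) that goes to the server. -->
<div x-data="{ open: false }">
  <button @click="open = true">Delete</button>

  <div x-show="open" role="dialog">
    <p>Really delete this item?</p>
    <button phx-click="delete" @click="open = false">Confirm</button>
    <button @click="open = false">Cancel</button>
  </div>
</div>
```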


I'm surprised to see so few mentions of AlpineJS. Personally, PETAL has become my de facto stack.


The main thing that's kept me from using Alpine in my serious projects is that it doesn't work with a strict CSP.


What is PETAL?


Phoenix, Elixir, Tailwind, Alpine, and LiveView.

https://changelog.com/posts/petal-the-end-to-end-web-stack


The PHP equivalent would be the TALL stack (Tailwind, AlpineJS, Laravel and Livewire). Although Livewire just communicates over AJAX. The original Websockets version didn't make it.

I just found out that Livewire was inspired by LiveView.


It’s telling that every answer is “just deploy servers near your users.”

One of YouTube’s most pivotal moments was when they saw their latency skyrocket. They couldn’t figure out why.

Until someone realized it was because their users, for the first time, were worldwide. The Brazilians were causing their latency charts to go from a nice <300ms average to >1.5s average. Yet obviously that was a great thing, because if Brazilians want your product so badly they’re willing to wait 1.5s every click, you’re probably on to something.

Mark my words: if elixir takes off, someday someone is going to write the equivalent of how gamedevs solve this problem: client side logic to extrapolate instantaneous changes + server side rollback if the client gets out of sync.

Or they won’t, and everyone will just assume 50ms is all you need. :)


> It’s telling that every answer is “just deploy servers near your users.”

This isn't the takeaway at all. The takeaway is we can match or beat SPAs that necessarily have to talk to the server anyway, which covers a massive class of applications. You'd deploy your SPA driven app close to users for the same reason you'd deploy your LiveView application, or your assets – reducing the speed of light distance provides better UX. It's just that most platforms outside of Elixir have no distribution story, so being close to users involves way more operation and code level concerns and becomes a non-starter. Deploying LiveView close to users is like deploying your game server close to users – we have real, actual running code for that user so we can do all kinds of interesting things being near to them.

The way we write applications lends itself to being close to users.


Imagine how painful HN would be if you upvoted someone and didn’t see the arrow vanish till the server responded. Instead of knowing instantly whether you missed the button, you’d end up habitually tapping it twice. (Better to do that than to wait and go “hmm, did I hit the button? Oh wait, my train is going through a tunnel…)

Imagine how painful typing would be if you had to wait after each keypress till the server acknowledged it. Everyone’s had the experience of being SSH’ed into a mostly-frozen server; good luck typing on a phone keyboard instead of a real keyboard without typo’ing your buffered keys.

The point is, there are many application-specific areas where client side prediction is necessary. Taking a hardline stance of “just deploy closer servers” will only handicap elixir in the long run.

Why not tackle the problem head-on? Unreal Engine might be worth studying here: https://docs.unrealengine.com/udk/Three/NetworkingOverview.h...

One could imagine a “client eval” code block in elixir which only executes on the client, and which contains all the prediction logic.
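To make the gamedev analogy concrete, here's a rough sketch of what such prediction logic looks like on the client, independent of any framework (this is not a LiveView API; `applyInput` and the sequence-number scheme are my own illustrative assumptions):

```javascript
// Gamedev-style client prediction: apply inputs locally right away,
// and when an authoritative server state arrives, roll back to it and
// replay any inputs the server hasn't acknowledged yet.
function makePredictor(applyInput, initialState) {
  let confirmed = initialState; // last authoritative server state
  let pending = [];             // inputs sent but not yet acknowledged

  return {
    // Called on every local input: predict the result immediately.
    predict(input) {
      pending.push(input);
      return this.view();
    },
    // Called when the server sends state confirmed up to `ackSeq`.
    reconcile(serverState, ackSeq) {
      confirmed = serverState;
      pending = pending.filter((input) => input.seq > ackSeq);
      return this.view();
    },
    // Current predicted view: confirmed state + replayed pending inputs.
    view() {
      return pending.reduce(applyInput, confirmed);
    },
  };
}
```

For a trivial counter, `applyInput` might be `(state, input) => state + input.delta`: clicks show instantly, and a late or corrective server state simply replaces the base before the unacknowledged clicks are replayed on top.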


You'd use the optimistic UI features that LiveView ships with out of the box to handle the arrow click, and you wouldn't await a server round-trip for each keypress, so again that's not how LiveView form input works. For posterity, I linked another blog where I talk exactly about these kinds of things, including optimistic UI and "controlled inputs" for the keyboard scenario: https://dockyard.com/blog/2020/12/21/optimizing-user-experie...

While we can draw parallels to game servers being near users, I don't think it makes sense for us to argue that LiveView should take the same architecture as an FPS :)


> Deploying LiveView close to users is like deploying your game server close to users – we have real, actual running code for that user so we can do all kinds of interesting things being near to them.

Then why do you start running forward instantly when you press “W” in counterstrike or quake? Why not just deploy servers closer to users?

Gamedev and webdev are more closely related than they seem. Now that webdev is getting closer, it might be good to take advantage of gamedev’s prior art in this domain.

There’s a reason us gamedevs go through the trouble. That pesky speed of light isn’t instant. Pressing “w” (or tapping a button) isn’t instant either, but it may as well be.


> Then why do you start running forward instantly when you press “W” in counterstrike or quake? Why not just deploy servers closer to users?

You do both? Game client handles movements and writes game state changes to a server, which should be close to the user to reduce the possibility for invalid state behaviors? You really haven't seen online games that deploy servers all over the world to reduce latency for their users? What?

Both web apps and games do optimistic server writes. Both web apps and games have to accommodate a failed write. Both web apps and games handle local state and remote state differently.


I read his post as a criticism of how little optimistic updating is done in web apps, and how bad the user story is. Why can't it be easy to build every app as a collaborative editing tool without writing your own OT or CRDT?


Because an occasional glitch when the client & server sync back up is acceptable in a game. Finding out that my order didn't actually go through is much worse. Especially since click button, see success, and close browser is a relatively common use case.


Consider these two scenarios.

1. SPA with asynchronous server communication. A button switches to a spinner the moment you click it, and spins until the update is safe at the server. Error messages can show up near the button, or in a toast.

2. LiveView where updates go via the server. The button shows no change (after recovering from submit "bounce" animation) until a response from the server has come back to you. To do anything better, you need to write it yourself, and now you're back in SPA world again.

There's a reason textarea input isn't sent to a server with the server updating the visible contents! Same thing applies to all aspects of the UX.

EDIT: https://dockyard.com/blogs/optimizing-user-experience-with-l... talks about this. That'll handle things like buttons being disabled while a request is in flight, but it won't e.g. immediately add new TODO items to the classic TODO list example.


That's a deliberate UI choice, though, and it doesn't always make sense in non-transactional workflows. It's easy to wait for Google Docs to say "Saved to Drive", and going to a new page to save a document would be really disruptive to your workflow, for example.


I remember this story but can't find it anywhere. If I recall correctly they deployed a fix that decreased the payload size. However, in doing so they actually opened the door to users with slow connections that were unable to use it at all before. So measured latency actually went up instead of down.


That’s the one! Where the heck is it? It’s one of my all time favorite stories, but it seems impossible to find; thanks for the details.



YES! Thank you! I’ve seriously been searching for like five decades. What was the magical search phrase? “YouTube Brazil increase latency” came back with “How YouTube radicalized Brazil” and other such stories. (Turns out the article mentions “South America” rather than “Brazil”; guess my Dota instincts kicked in.)

Anyway, you rock. :)


Thank you! It was impossible to find anything on Google since any variant of "youtube", "latency" etc showed results for problems with YouTube or actual YouTube videos talking about latency.

The trick was to use HN search: "youtube latency" and select Comments. First result was a comment on https://www.forrestthewoods.com/blog/my_favorite_paradox/ which links the story in the "Bonus!" section.


> Mark my words: if elixir takes off, someday someone is going to write the equivalent of how gamedevs solve this problem: client side logic to extrapolate instantaneous changes + server side rollback if the client gets out of sync.

most games have the benefit that they're modeling the mechanics of physical objects moving around in the world and are having their users express their intentions through spatial movement. the first gives a pretty healthy prior in terms of modeling movement when data drops out and the latter can be fairly repetitive and thereby learnable and predictable.

whether or not user interaction behaviors can be learned within the context of driving web applications seems a little less clear, to me at least. it does seem like there are a lot more degrees of freedom.


Nothing so complicated. All that's needed is a local cache so that when you type a new message in the chat window, you immediately see it appear when you hit submit (optionally with an indication of when the message was received by the peer). But there's quite a bit of tooling required to reliably update the local cache, run the code both in the client and on the server.
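A minimal sketch of that local-cache idea (all names here are made up for illustration; a real implementation would also need retry and failure handling):

```javascript
// Optimistic chat outbox: new messages render immediately as
// "pending" and are reconciled when the server echoes them back
// with authoritative metadata (id, timestamp, ...).
function makeOutbox() {
  const messages = []; // what the chat window renders

  return {
    // Called on submit: show the message instantly.
    send(clientId, text) {
      messages.push({ clientId, text, status: "pending" });
      return messages;
    },
    // Called when the server acknowledges the message.
    ack(clientId, serverMeta) {
      const msg = messages.find((m) => m.clientId === clientId);
      if (msg) Object.assign(msg, serverMeta, { status: "delivered" });
      return messages;
    },
  };
}
```

The UI can render `pending` messages slightly greyed out, which is exactly the "indication of when the message was received" mentioned above.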


Firebase does this brilliantly with Firestore queries. Any data mutation by the client shows up in persistent searches immediately, flagged as tentative until server acknowledges.


> server side rollback

server controlled client side rollback, you mean?


On the modern internet, with some assumptions, you can get to something like 2x faster (in my case) when sending data over an *already established* connection.

Example:

A full fresh HTTP connection from client to first byte takes ~400ms (I'm in the US, the server is in Europe). This includes resolving DNS, opening the TCP connection, the SSL handshake, etc...

But if the connection is already established, it only takes ~200ms to first byte.

If I deployed the server in the same region, say US customer <-> US server, this came down to 20ms...

That means it's good enough.

Not super ideal, but it's a trade-off we're willing to make.


LiveView (I think) already achieves this optimisation as both the initial content and any future updates come over the same persistent websocket connection.


Not OP, but:

> If latency is high, this can make for a poor user experience, the argument goes.

Deploy auto-scaling servers closer to your users: Use fly.io (or any other competent edge platform, really).


A better balance would be to build the webapp in hybrid mode, where some logic can be run by client-side JavaScript. Only the event handlers that rely on data from the server need to be sent to the server.

In this pixel paint demo, the state and reaction of "changing the pen color" can happen locally: https://github.com/beenotung/live-paint/blob/dd3b370/server/...


I have never used LiveView, and this is tangential to the latency consideration here. Two applications that I think can be enabled by siphoning all events to the server are server-side analytics and time-travel debugging (or reconstruction of a session). I am so glad to learn of this tool and will definitely give it a try in my next project.


Here's a demo that illustrates that delay: https://breakoutex.tommasopifferi.com/

Agree it's a super neat framework and hope a client-side implementation can be written to "stage" the change.


Chris said in a different comment optimistic ui updates already exist…

Here’s the link: https://dockyard.com/blog/2020/12/21/optimizing-user-experie...


Here is an article about updating the UI without waiting for the roundtrip to the server.

http://blog.pthompson.org/liveview-tailwind-css-alpine-js-mo...



