Someone mentioned waypipe, which I don't know, but it seems to be the way to go if one can use it (it does seem to require installing waypipe on the server).
In any case, ssh -X still works fine out of the box; it transparently uses XWayland behind the scenes.
It has limitations compared to native Wayland, such as lacking good support for multiple screens with different DPIs, but the same limitations exist in a native X11 session anyway.
So Wayland is at least not a downgrade on this topic. It's been handled well.
It still has abysmal performance (and many other issues), though. I've been using Xpra for a while now, to at least let programs survive if my connection is interrupted. Makes it possible to work from a train or conference Wi-Fi.
waypipe has good performance, but IIRC it suffers from the same disconnection issue. Hopefully recent work from KDE on surviving a compositor restart means that restarting a waypipe session will be possible, or even just deciding to forward a program after starting it (which wouldn't be possible with X).
One could also imagine exposing a window to both local and remote compositors (could be implemented in waypipe), with simultaneous access leveraging Wayland multiseat. "Multiplayer" compositing.
Forwarding after starting program or disconnecting would be entirely possible with X if toolkits supported it. I wrote a tool once which did this for my own work.
I also use ssh -X a lot from home to work and performance is sufficient. Xpra shows that it could also work over low latency links if you have latency hiding, which X supports because it is asynchronous, but - again - toolkits never bothered. Fixing these issues would be a million times more useful than redeveloping everything from scratch.
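For what it's worth, here's a rough sketch of what that latency hiding looks like at the protocol level with plain xcb (my own illustration, with error and Expose handling omitted): drawing requests are queued and flushed in one batch, so a thousand calls cost roughly one round trip instead of a thousand. Whether existing toolkits could realistically be restructured to batch like this is of course the open question.

    /* A minimal sketch of protocol-level latency hiding with xcb; error and
     * Expose handling are omitted, so treat it as illustrative only.
     * Build with: gcc batch.c -lxcb */
    #include <stdint.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <xcb/xcb.h>

    int main(void) {
        xcb_connection_t *c = xcb_connect(NULL, NULL);  /* honours $DISPLAY, so ssh -X works too */
        if (xcb_connection_has_error(c)) return 1;
        xcb_screen_t *screen = xcb_setup_roots_iterator(xcb_get_setup(c)).data;

        xcb_window_t win = xcb_generate_id(c);
        uint32_t wval[] = { screen->white_pixel };
        xcb_create_window(c, XCB_COPY_FROM_PARENT, win, screen->root,
                          0, 0, 400, 300, 0, XCB_WINDOW_CLASS_INPUT_OUTPUT,
                          screen->root_visual, XCB_CW_BACK_PIXEL, wval);

        xcb_gcontext_t gc = xcb_generate_id(c);
        uint32_t gval[] = { screen->black_pixel };
        xcb_create_gc(c, gc, win, XCB_GC_FOREGROUND, gval);
        xcb_map_window(c, win);

        /* Latency hiding: queue 1000 drawing requests without waiting for any
         * answer, then flush them in one batch. Over a slow link this costs
         * roughly one round trip instead of a thousand. (A real client would
         * redraw on Expose events.) */
        for (int i = 0; i < 1000; i++) {
            xcb_rectangle_t r = { (int16_t)(i % 380), (int16_t)((i * 7) % 280), 10, 10 };
            xcb_poly_fill_rectangle(c, win, gc, 1, &r);
        }
        xcb_flush(c);

        /* The opposite pattern, a synchronous round trip per request, is what
         * makes remote X feel slow; here is a single explicit round trip just
         * to drain the queue. */
        free(xcb_get_input_focus_reply(c, xcb_get_input_focus(c), NULL));

        pause();  /* keep the window up; Ctrl-C to quit */
        xcb_disconnect(c);
        return 0;
    }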
> Forwarding after starting program or disconnecting would be entirely possible with X if toolkits supported it.
Sure, in that case you could even switch back-ends and have your toolkit drop its connection to X and start talking to GBM/DRM, Wayland, or another X server. However, the required changes would likely be invasive. I get your point though; this is relatively similar to Wayland programs surviving compositor restarts. That said, you would need to start tracking all the state that the X server tracks, to replay it later. Not impossible, but quite hard to bolt onto existing implementations, I think.
> Fixing these issues would be a million times more useful than redeveloping everything from scratch
That's your opinion. In my opinion, Wayland does a lot of things right; the main one being a specification that everyone can implement. That makes it much easier to start from scratch to implement an innovative feature in a proof-of-concept toy compositor or program. We've seen a lot of these projects, and consolidation takes time. Once proven, features tend to trickle down to general purpose compositors.
I am really satisfied with the tools Wayland has given us, from gamescope to nested compositors, better-behaved clients (no more clients that refuse to go fullscreen, etc.), a very stable experience with few crashes (compared to misbehaving clients taking down the X server with them), easy multiseat, no more tearing, no more xorg.conf, better client isolation (notably remote apps cannot spy on locally-running apps), pipewire-based screensharing, and a choose-your-own-compositor-features approach (granted, not for everyone; but those uncomfortable with it can stick with KDE or GNOME).
The future seems promising, mostly thanks to wlr protocols, especially wlr-layer-shell, which should allow running UI elements from a DE on other compatible compositors (I can't wait to use xfce4-panel on sway starting from their next release[1]).
> I also use ssh -X a lot from home to work and performance is sufficient
Personally, I ran into issues with Cadence, even on a LAN: that software uses some X toolkit, and some very long lists (scrollable form-like dialogs) took dozens of seconds to display; I also had various font issues, issues with software moving my mouse (I hate this), and performance issues with complex drawings. Most of these disappeared when running through Xpra. Not to mention losing my work because of small internet cuts, putting my computer to sleep, or having the ssh connection interrupted somehow (ssh has trouble with roaming; connecting over wireguard helps with that).
> Xpra shows that it could also work over low latency links if you have latency hiding, which X supports because it is asynchronous, but - again - toolkits never bothered
This is getting too technical, I don't think I'm qualified to discuss this. Though don't you mean high latency links? Xpra is basically a local X server that sends data with a protocol similar to VNC. It is quite similar to how Waypipe handles things. I may be wrong on this, but I think RDP may combine the best of both worlds? Dumb "vnc-like" connection by default, and make use of optimized implementations in the toolkits when available.
>> Xpra shows that it could also work over low latency links if you have latency hiding, which X supports because it is asynchronous, but - again - toolkits never bothered
>This is getting too technical, I don't think I'm qualified to discuss this. Though don't you mean high latency links?
Yes, of course.
>Xpra is basically a local X server that sends data with a protocol similar to VNC. It is quite similar to how Waypipe handles things. I may be wrong on this, but I think RDP may combine the best of both worlds? Dumb "vnc-like" connection by default, and make use of optimized implementations in the toolkits when available.
I do not think you will ever get client integration as good as X's with dumb protocols, and my experience with RDP has always been relatively poor. Xpra uses its own protocol between two proxies but supports good integration, so it is different from a dumb screen-scraping approach; but I think it could also work by doing the latency handling on the client and speaking directly to a remote X server using X. The reason I believe that would be possible is that X is a very flexible remote buffer-handling protocol: caching of some image content and copying it around could all be done remotely, controlled by the client. I started to implement something like this but then did not have time... But the flexibility and extensibility of X is also the reason I think that throwing it away is completely unnecessary.
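To make the "remote buffer handling" point concrete, here's a rough xcb sketch of the kind of thing I mean (my own illustration, error handling stripped, and it assumes the common 24-bit root depth padded to 32bpp): the pixel data is uploaded into a server-side pixmap exactly once, and every later redraw is just a tiny CopyArea request that the server resolves from its own memory.

    /* Sketch: upload a 32x32 tile into a server-side pixmap once, then blit it
     * with CopyArea on every redraw -- the pixel data crosses the wire only
     * once. Assumes a 24-bit root depth padded to 32bpp (the common case).
     * Build with: gcc tile.c -lxcb */
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
    #include <xcb/xcb.h>

    int main(void) {
        xcb_connection_t *c = xcb_connect(NULL, NULL);
        if (xcb_connection_has_error(c)) return 1;
        xcb_screen_t *s = xcb_setup_roots_iterator(xcb_get_setup(c)).data;

        xcb_window_t win = xcb_generate_id(c);
        uint32_t wval[] = { s->white_pixel, XCB_EVENT_MASK_EXPOSURE };
        xcb_create_window(c, XCB_COPY_FROM_PARENT, win, s->root, 0, 0, 320, 240, 0,
                          XCB_WINDOW_CLASS_INPUT_OUTPUT, s->root_visual,
                          XCB_CW_BACK_PIXEL | XCB_CW_EVENT_MASK, wval);

        xcb_gcontext_t gc = xcb_generate_id(c);
        xcb_create_gc(c, gc, win, 0, NULL);

        /* The "remote buffer": a pixmap that lives in the X server's memory. */
        xcb_pixmap_t tile = xcb_generate_id(c);
        xcb_create_pixmap(c, s->root_depth, tile, win, 32, 32);

        /* Ship the pixel data exactly once. */
        uint8_t px[32 * 32 * 4];
        memset(px, 0x80, sizeof px);
        xcb_put_image(c, XCB_IMAGE_FORMAT_Z_PIXMAP, tile, gc, 32, 32, 0, 0, 0,
                      s->root_depth, sizeof px, px);

        xcb_map_window(c, win);
        xcb_flush(c);

        /* Every redraw is a handful of tiny CopyArea requests; the server
         * copies from its own memory, the client never resends the image. */
        xcb_generic_event_t *ev;
        while ((ev = xcb_wait_for_event(c))) {
            if ((ev->response_type & ~0x80) == XCB_EXPOSE) {
                for (int y = 0; y < 240; y += 32)
                    for (int x = 0; x < 320; x += 32)
                        xcb_copy_area(c, tile, win, gc, 0, 0, x, y, 32, 32);
                xcb_flush(c);
            }
            free(ev);
        }
        xcb_disconnect(c);
        return 0;
    }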
Sort of. The foundational issue here is the longstanding premise "Wayland can replace X!". The problem is that Wayland can't replace X, only Wayland plus a bunch of other components - or as I like to call it, Wayland++.
So Wayland++ can provide network transparency, but whenever a W++ feature has issues and those issues are criticized, Wayland advocates will just motte-and-bailey the issue by saying "but that's not part of Wayland!", which is technically true but irrelevant. "Wayland" can mean Wayland++ or just Wayland-core, depending on what's convenient.
Wayland proper is a protocol specification. By itself, it's completely inert; it's all up to an implementation.
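You can see that even from a trivial client: the core protocol only hands you a registry, and which globals show up in it is entirely up to whichever compositor you happen to be talking to. A minimal sketch with libwayland-client:

    /* Sketch: connect to whatever compositor $WAYLAND_DISPLAY points at and
     * print the globals it chose to implement.
     * Build with: gcc globals.c -lwayland-client */
    #include <stdio.h>
    #include <wayland-client.h>

    static void on_global(void *data, struct wl_registry *reg, uint32_t name,
                          const char *interface, uint32_t version) {
        (void)data; (void)reg; (void)name;
        printf("%s (version %u)\n", interface, version);
    }

    static void on_global_remove(void *data, struct wl_registry *reg, uint32_t name) {
        (void)data; (void)reg; (void)name;
    }

    static const struct wl_registry_listener listener = { on_global, on_global_remove };

    int main(void) {
        struct wl_display *dpy = wl_display_connect(NULL);
        if (!dpy) { fprintf(stderr, "no Wayland display\n"); return 1; }

        struct wl_registry *reg = wl_display_get_registry(dpy);
        wl_registry_add_listener(reg, &listener, NULL);
        wl_display_roundtrip(dpy);  /* block until all globals are announced */

        wl_display_disconnect(dpy);
        return 0;
    }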
The protocol uses shared memory buffers and file descriptors, so it can't be just transported through TCP as-is. You need something like waypipe, which parses part of the protocol, extracts things like file descriptors that won't make sense on the other end, and then reconstructs things on the destination.
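Concretely, those file descriptors travel as SCM_RIGHTS ancillary data on the unix socket, a kernel facility with no TCP equivalent, so a forwarder has to intercept the fd and ship the underlying data itself. A bare-bones sketch of just the socket mechanism (not waypipe's actual code):

    /* Sketch: passing a file descriptor as SCM_RIGHTS ancillary data, the way
     * a Wayland client hands shm buffers (or dmabufs) to the compositor.
     * Works on AF_UNIX sockets only; there is no TCP equivalent, which is what
     * a forwarder has to paper over.
     * Build with: gcc fdpass.c */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <sys/uio.h>
    #include <unistd.h>

    static ssize_t send_fd(int sock, int fd) {
        char dummy = 0;
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };

        union {  /* correctly aligned buffer for one control message */
            char buf[CMSG_SPACE(sizeof(int))];
            struct cmsghdr align;
        } u;
        memset(&u, 0, sizeof u);

        struct msghdr msg = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = u.buf, .msg_controllen = sizeof u.buf,
        };
        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type  = SCM_RIGHTS;              /* "here, take this fd" */
        cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

        return sendmsg(sock, &msg, 0);
    }

    int main(void) {
        int sv[2];
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) return 1;
        if (send_fd(sv[0], STDOUT_FILENO) < 0) { perror("sendmsg"); return 1; }
        puts("fd passed over the unix socketpair");
        close(sv[0]); close(sv[1]);
        return 0;
    }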
waypipe turns out not to be that complicated, it's just 15K lines of code.
>Wayland proper is a protocol specification. By itself, it's completely inert; it's all up to an implementation.
Wayland should have shipped with a default implementation that had screen sharing, recording, clipboard and everything else that x11 had by default. The fact that they've thrown all that responsibility on DEs without so much as a HOWTO on how to reach parity is ridiculous. I will never understand why anyone took their effort seriously.
> The fact that they've thrown all that responsibility on DEs without so much as a HOWTO on how to reach parity is ridiculous.
Well, all DEs (excepting maybe Xfce?) have members in the working groups that design the Wayland protocol extensions, so it can be assumed that people are well aware of what needs to be done.
Wayland has a default implementation called Weston, but I'm not sure that any of its devs cared enough to implement the extensions which are responsible for all the other bits that you mentioned.
most software that supported network transparency in the way X11 started out with has given up on it (as has the industry as a whole), and there seems to be a clear technical consensus that it's best not to approach remote access this way. Even in X11 it was semi-abandoned (by developers, not by people using it) long before X11 itself was semi-abandoned.
As far as I can tell, from the POV of the discussion of whether Wayland (or any other hypothetical replacement) needs to support it, the answer has always been a clear "no, it doesn't need to, nor should it try to".
This doesn't mean you can't have remote shared applications, remote desktops, screen sharing or similar, just not via network transparency, i.e. not by pretending the things the application communicates with (compositor, GPU, etc.) are on the same computer and "transparently" routing (part of) that communication to a different computer. And if you consider the stark difference in latency, reliability and throughput between a Unix pipe or a PCIe link to a GPU on one side and TCP over Ethernet on the other, it can feel surprising that it was ever considered a good idea (but then, when X11 was built, network transparency was the big thing people put into everything, and it has since been removed from most of those places).
So what replaces network transparency (and did replace it in many cases long before Wayland was relevant) is typical remote desktop functionality: an additional application grabs the mouse/keyboard input on one side and the rendered output on the other side, and the two sides send these to each other. This has many benefits both for the people not using it and the people using it, while many of the drawbacks make little practical difference anymore. The main issue is whether there is a high-quality, free and open-source program you can use, and whether it's installed on the system where you want to use it...
A server in a datacenter generally doesn’t have a GPU, certainly not enough to support thousands of clients (each of which does have a GPU plugged right into one user’s monitor). Software rendering is a regression that didn’t need to happen, and Javascript apps seem to be the way the industry is avoiding it (with the browser as a remote display server).
this use case has been broken in X11 for a very long time, because to make it work well you don't just need some form of network transparency in the display server but also remote rendering for OpenGL and Vulkan
> Software rendering is a regression that didn’t need to happen
But in most cases it's not happening, because for most applications you don't render on the server; you render on a client which interacts with a server.
> and Javascript apps seem to be the way the industry is avoiding it (with the browser as a remote display server).
Today many JS apps are not thin clients; they are often quite complete applications. But let's ignore that for a moment.
I'm not sure what exactly you are imagining, but as far as I can tell the only way to make the kind of remote rendering you are implying work in general would be to turn X11 into a GUI toolkit with a stable cross-OS interface, make it the only supported GUI toolkit, and accept that any fancy GPU rendering (e.g. games) would fundamentally not work. There is just no way this would ever have worked.
The reason the industry mostly abandoned network transparency, not just for remote display servers but also in most other places, is that it just doesn't work well in practice. Even many of the places which do still use network transparency (e.g. network file systems) tend to run into unexpected issues because software happens not to cope well with the changed reliability/latency/throughput characteristics this introduces.
> Software rendering is a regression that didn’t need to happen
Actually, it did need to happen. The actual straw that broke the X developers was font metrics, IIRC. Essentially, if you want to support fonts for the language of the most populous country on Earth, you need to do more or less complete font rendering to answer questions like "how long is this span of text going to be?" (so that you can break it). And the X developers tried to make it work with the X model, but the only way they could get it to work well was to have the X server ship the font to the X client and the X client ship rendered bits back to the X server [even over the network!].
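For anyone unfamiliar with what "font metrics" involves here, this is roughly the old core-protocol model (a small Xlib sketch of my own, not how any modern toolkit does it; they moved to client-side fontconfig/FreeType): the font lives in the server, and the client either downloads the whole metrics table or pays a round trip per string it wants to measure.

    /* Sketch: text measurement with core X fonts. The font lives in the
     * server; XLoadQueryFont downloads its full metrics table (enormous for
     * CJK fonts), while XQueryTextExtents instead pays a server round trip per
     * string. Modern toolkits avoid both by rendering fonts client-side.
     * Build with: gcc metrics.c -lX11 */
    #include <stdio.h>
    #include <string.h>
    #include <X11/Xlib.h>

    int main(void) {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy) return 1;

        /* Pulls the per-glyph metrics of a server-side font to the client. */
        XFontStruct *font = XLoadQueryFont(dpy, "fixed");
        if (!font) return 1;

        const char *text = "How wide am I?";
        int direction, ascent, descent;
        XCharStruct overall;

        /* Measure locally, using the downloaded table... */
        XTextExtents(font, text, (int)strlen(text),
                     &direction, &ascent, &descent, &overall);
        printf("local:  width = %d px\n", overall.width);

        /* ...or ask the server, one round trip per string. */
        XQueryTextExtents(dpy, font->fid, text, (int)strlen(text),
                          &direction, &ascent, &descent, &overall);
        printf("server: width = %d px\n", overall.width);

        XFreeFont(dpy, font);
        XCloseDisplay(dpy);
        return 0;
    }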
Sometimes because you want users to be able to change workstations, sometimes because you want a highly specific environment outside of the user's control (it can reset on each connection), sometimes because you want nothing to be kept locally. E.g. the country somebody works in is untrustworthy, so they access everything somewhere remote and safe.
virtual desktops on demand tend to be run on servers with GPUs and generally prefer server-side GPU rendering, because they are meant to work with any client that can access them, even one with an extremely weak GPU
and if you have no complex rendering requirements then it's often a much better choice to place the network gap in the GUI toolkit instead of the display server, as this tends to work way better. In this case you do need a thin client on the other side, but you need one for remote X11 too (the client machine needs to run an X server), so it's not that different. And today the easiest way to ship thin clients happens to be JS/WebGPU, which is how we get things like GTK's Broadway (HTML5) backend.