TL;DR up front: for now. The UI absolutely isn't as fleshed out, but that's not what makes this important.
Yeah, it's built primarily as a test environment for apps that require free-form multiwindow, multi-resume, and the like -- think foldables, mostly. So it's pretty skeletal at the moment. (Incidentally it's also hidden behind a developer flag.)
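For reference, the flag in question can be flipped over adb without root. A rough sketch, assuming the setting names from the Android 10 builds (both also show up as toggles under Developer options, so treat the exact names as subject to change):

```shell
# Sketch only: setting names as of the Android 10 builds; they may
# change between releases. Needs Developer options unlocked, no root.

# Allow apps to be launched in freeform (resizable) windows:
adb shell settings put global enable_freeform_support 1

# Surface the experimental desktop launcher on external displays:
adb shell settings put global force_desktop_mode_on_external_displays 1

# Both flags are read at boot, so restart the device to apply them:
adb reboot
```

After the reboot, plugging into an external display (or a wireless display) should bring up the skeletal desktop launcher instead of plain mirroring.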
However, it's managed by the system launcher, meaning it can be extended by the community or by OEMs, and the Android team is actively encouraging people to do that. It also means that there's a set of standard interfaces for them to build this functionality on. The latest version of DeX was reworked to use it, LG rolled out an implementation with Android 10, and there have been rumors of Nvidia working on a transformable SHIELD device. (I would assume EMUI Desktop was also rebased, but haven't seen anything confirming that.) It's also starting to pop up in custom ROMs like BlissOS.
This has much bigger implications than DeX did (or foldables do), imo, for a few reasons:
- Google has been slowly building out broader, less hardware-dependent support for large form factors across a variety of projects - from ChromeOS' Android runtime (which will also benefit from more developers targeting this), to Fuchsia's scalable UI, to Flutter's recent web and desktop support, to PWAs (which are responsive by definition) and their integration into pretty much every major platform. The ecosystem around convergence across their products is actually a bunch of different pieces, all evolving in parallel, each somewhat symbiotic with the others, and many of which you can try out in some form right now.
- That said, this particular piece has a much lower barrier to entry. You don't need a particular class of Android device or laptop, or a specific product, or a custom ROM, or even root. That doesn't mean the average user will pick this up for a while, but the fact that the average user *can* guarantees that a bunch of tinkerers and power users already are. That never happened at scale for, say, Android tablets.
- This creates a lot more incentive to build for desktops, with the potential to then recycle that larger UI for form factors like tablets. Giving desktop UIs a standard base means that supporting, say, DeX no longer requires writing a vendor-specific implementation (see also: biometric sensors, styli, and various other hardware classes that the ecosystem picked up over time, but which only got broad third-party support after being added to AOSP). And because it only requires a display output on your device, you can target a much broader audience (even if your current audience is mostly just places like XDA).
Putting aside broader conversations about the kinds of use cases that could arise from convergent UIs, I’d argue that this is one of the biggest steps forward we’ve had in a while toward making that future actually happen.
Android 11 is still early in its public testing phase -- developer previews are essentially pre-beta -- so it's hard to make concrete guesses about what user-facing features will show up in the next version at this point.