This is WSL2, right? I had the impression that WSL1 provided a better experience, but it was too complex to maintain and Microsoft decided to take the easy road.
Yes, WSL2 is a mini-VM. Having used both, I don't think it's a slam dunk that WSL1 was better; really, they were just different.
Particularly for the more complex cases of container APIs, GPU access, desktop integration, etc. Those are solved problems in the VM space and reinventing new solutions at a slightly different layer is not necessarily wise or realistic.
But WSL2 also doesn't add anything of value to the development experience on Windows that wasn't already possible with a VM.
For my use cases, I just want to target Unix-like APIs and use bash-isms so I can reuse my build scripts and tooling. I don't really care if the binary format I consume or compile to is different from Linux's, as long as my development experience is identical across macOS, Windows, and Linux.
A thin layer on top of Windows that mimics Linux so I can run bash _properly_ is all I really need.
The closest I've come is using MSYS2 with zsh and uutils, and it is so close, but there are still cases where it simply doesn't work. WSL1 was pretty close too, but it fell short by needing remote development tools and by having isolated storage and poor performance when interacting with host storage.
WSL2 is DOA for me; I just hand-roll my own Linux VM and call it a day.
The thinness is actually part of the problem. POSIX and Windows APIs don't work like each other.
For example, if you were a Unix archiver extracting a file, you'd call stat(), open(), write(), close(), chmod(), utimes(). On Linux/BSD that's one file open/close, but on Windows it's four, because the Windows APIs need an open file handle while the POSIX APIs take a filepath. Linux/BSD keep LRU caches of filepath->inode lookups because they're designed around these APIs; Windows doesn't. Cygwin has to open and close a Windows file handle for each path-based call because it can't make the calling code pass a file handle instead.
So it may be more comfortable and familiar, but the Windows APIs are unsympathetic to code that was originally designed for Unix.
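To make that concrete, here's a minimal C sketch of that extraction sequence (hypothetical function name, error handling omitted). On Linux/BSD the kernel resolves the three path-based calls through its name cache; on Windows a layer like Cygwin has to turn each one into its own handle open/close:

```c
/* Minimal sketch of the extraction sequence above (return values
   unchecked for brevity; extract_one is a made-up name). The calls
   marked "path-based" each force a POSIX emulation layer on Windows
   to open and close a separate Win32 file handle internally. */
#include <fcntl.h>
#include <sys/stat.h>
#include <sys/time.h>
#include <unistd.h>

int extract_one(const char *path, const char *data, size_t len,
                mode_t mode, const struct timeval times[2])
{
    struct stat st;
    stat(path, &st);                /* path-based: handle open #1 */

    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0600); /* open #2 */
    write(fd, data, len);
    close(fd);

    chmod(path, mode);              /* path-based: handle open #3 */
    utimes(path, times);            /* path-based: handle open #4 */
    return 0;
}
```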
I do think that talk is way too kind to the Windows design. They're trying to argue that the Windows filesystem isn't slow, but the talk itself describes how fixing the problem took three months of optimization and direct help from Microsoft. All of this could have been avoided if Microsoft's filesystem weren't ridiculously slow.
File system filters are pluggable kernel drivers. For example, ProcMon (a Sysinternals tool) monitors file systems via a file system filter -- so, fun fact: if you outright disable file system filters on a ReFS volume, you won't get any ProcMon results! This was a 'duh' moment for me.
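As a hedged illustration (not how ProcMon itself is implemented), the user-mode filter manager API can enumerate the minifilter drivers currently loaded, much like running `fltmc filters`; this sketch assumes you link against fltlib.lib and run elevated:

```c
/* List the loaded file system minifilters, roughly what
   `fltmc filters` prints. ProcMon's own filter would show up here. */
#include <windows.h>
#include <fltuser.h>
#include <stdio.h>

int main(void)
{
    BYTE buffer[1024];
    DWORD bytes = 0;
    HANDLE find = INVALID_HANDLE_VALUE;

    HRESULT hr = FilterFindFirst(FilterFullInformation, buffer,
                                 sizeof(buffer), &bytes, &find);
    while (SUCCEEDED(hr)) {
        FILTER_FULL_INFORMATION *info = (FILTER_FULL_INFORMATION *)buffer;
        for (;;) {
            /* FilterNameBuffer is not NUL-terminated; print by length. */
            wprintf(L"%.*ls\n",
                    (int)(info->FilterNameLength / sizeof(WCHAR)),
                    info->FilterNameBuffer);
            if (info->NextEntryOffset == 0)
                break;
            info = (FILTER_FULL_INFORMATION *)
                   ((BYTE *)info + info->NextEntryOffset);
        }
        hr = FilterFindNext(find, FilterFullInformation, buffer,
                            sizeof(buffer), &bytes);
    }
    if (find != INVALID_HANDLE_VALUE)
        FilterFindClose(find);
    return 0;
}
```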
I find the filesystem and network integration a lot nicer than what I get from a VirtualBox VM. Having the WSL system see the host as /mnt/c, and the WSL filesystems appear in Windows Explorer, is pretty darn convenient.
I know conventional VMs do this kind of thing too, but I've always found it more awkward to set up: you have to install special tools or kernel modules, etc. With WSL it just works out of the box.
Pulling up a terminal running WSL, instead of running a VM, is a superior experience for me when all I need is terminal coding and Python/Bash scripting, without having to block off a chunk of RAM for a virtual machine.
Yeah, WSL1 was a great idea, but there were so many edge cases. E.g., allocating a gig of memory you're not actually using is fast on Linux, not so on Windows.
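A sketch of what that edge case looks like from C, assuming default Linux overcommit behavior: the malloc below mostly reserves address space on Linux, while Windows charges the full gigabyte against the system commit limit up front, before any page is touched.

```c
/* Allocate 1 GiB without (initially) using it. On Linux with default
   overcommit, malloc() is cheap: physical pages are only assigned on
   first touch. On Windows, the CRT heap commits the block up front,
   so the allocation itself is charged against the commit limit. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t size = (size_t)1 << 30;   /* 1 GiB */
    char *buf = malloc(size);        /* fast on Linux, costlier on Windows */
    if (!buf)
        return 1;

    memset(buf, 1, size);  /* only now does Linux fault in physical pages */
    printf("touched %zu bytes\n", size);

    free(buf);
    return 0;
}
```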
> but it was too complex to maintain and Microsoft decided to take the easy road.
The insurmountable problem was file system semantics. In Linux, files are dumb and fast. On Windows, files are smarter (every open goes through richer security checks and a stack of filter drivers) and therefore also slower. Linux applications expect dumb, fast files, and they produce a lot of them. Mapping Linux file system calls to Windows file system calls is doable, but you can't overcome that difference.
At that point, simply virtualizing the Linux kernel itself is obviously the better long-term solution.
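To see why the mapping is costly per call, here's a hedged sketch (illustrative only, not Cygwin's or WSL1's actual code) of what a single path-based POSIX call like utimes() turns into on Win32, where the file APIs want a handle rather than a path:

```c
/* Roughly what emulating one utimes() call costs on Windows: the Win32
   file APIs operate on handles, so a path-based POSIX call becomes an
   open/modify/close round trip, and every CreateFileW traverses the
   filter-driver stack and security checks. */
#include <windows.h>

BOOL set_mtime(const wchar_t *path, FILETIME mtime)
{
    HANDLE h = CreateFileW(path,
                           FILE_WRITE_ATTRIBUTES,   /* minimal access needed */
                           FILE_SHARE_READ | FILE_SHARE_WRITE,
                           NULL, OPEN_EXISTING,
                           FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return FALSE;

    BOOL ok = SetFileTime(h, NULL, NULL, &mtime);  /* NULLs: leave unchanged */
    CloseHandle(h);
    return ok;
}
```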
Filesystem access was slow. That meant that Git was slow. And every Linux techie thinks they're the next Torvalds and hacks on code and uses Git, so Git needed to work well.