> VM to only run a browser in there, to keep the memory under control
For other Linux users out there — a VM is not needed for this, use a cgroup with memory limits. It's very easy to do with systemd, but can be done without it:
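For example, something like this (a minimal sketch; the 2 GiB figure and Firefox are just the values used below, and the raw cgroup variant assumes cgroup v2 with the memory controller enabled, plus root or a delegated subtree):

    # with systemd: run the browser in a transient scope with a soft memory limit
    systemd-run --user --scope -p MemoryHigh=2G firefox

    # without systemd (cgroup v2): make a group, set the limit, move the shell into it
    sudo mkdir /sys/fs/cgroup/browser
    echo 2G | sudo tee /sys/fs/cgroup/browser/memory.high
    echo $$ | sudo tee /sys/fs/cgroup/browser/cgroup.procs
    firefox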
The kernel will prevent Firefox (including all child processes) from using more than 2 GiB of RAM by forcing the excess into swap. To quote systemd.resource-control(5):
> Specify the throttling limit on memory usage of the executed processes in this unit. Memory usage may go above the limit if unavoidable, but the processes are heavily slowed down and memory is taken away aggressively in such cases. This is the main mechanism to control memory usage of a unit.
If you'd rather have it OOMed, use MemoryMax=2G.
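With systemd-run that could look like the following (adding MemorySwapMax=0 is optional; it stops the unit from spilling into swap, so the hard limit actually triggers the OOM killer):

    systemd-run --user --scope -p MemoryMax=2G -p MemorySwapMax=0 firefox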
It's actually very useful for torrent clients. If you seed terabytes of data (like I do), the client quickly forces more useful data out of the page cache. Even if you have dozens of gigabytes of RAM, the machine can get pretty slow. This prevents the client from doing that.
There are lots of other interesting controllers that can put limits on disk and network I/O, CPU usage, etc.
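A rough sketch for a seeding client, showing the disk I/O and CPU knobs (the client name, device path, and numbers are placeholders; per-device IO limits may need the io controller delegated, so running this as a system-level unit with sudo might be necessary):

    # bound the client's page cache footprint, disk reads, and CPU share
    systemd-run --scope \
      -p MemoryHigh=2G \
      -p IOReadBandwidthMax="/dev/sda 20M" \
      -p CPUQuota=50% \
      transmission-gtk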
Wayland support didn't make it... Oh well, it is in version 29.
I've been using the Wayland version with libgccjit for many months now from their git repo, and it is an extremely snappy and stable editor.
My strategy to keep all of this together is a Nix derivation that compiles the latest master branch with all the plugins. Oh, and my config is an org file with clear comments...
Btw, I recommend the System Crafters video series Emacs From Scratch. It teaches how to make vanilla Emacs work like Doom Emacs does. It was helpful for me to understand the magic behind Doom...
The primary difference is that in Nix it's fairly easy to wrap arbitrary build systems, so you can inject third-party dependencies into your codebase without vendoring and adapting their build system (plus, through nixpkgs[0], the majority of all relevant software has already been wrapped).
In Bazel you have to do a lot of that work yourself. Google has essentially unlimited manpower to do this for the things in its third_party tree, but organisations of any other size do not, and people resort to all sorts of ugly hacks.
Depending on your needs these different types of projects can also coexist with each other in Nix. To use an example from my work, we have a Nix-native build system for Go[1] and code using this[2] co-exists with code that just uses the standard Go build system[3]. Both methods end up being addressable on the same abstraction level, meaning that they both turn into equivalent build targets (you can see both in a full build[4] of our repo).
And for what it's worth, some of the things Bazel gets extremely right (such as a straightforward mapping from code location to build target, and a universal build command) are pretty easy to do in Nix (see readTree[5], magrathea[6]).
Tesla's not trying to reduce the number of cars; it's trying to replace the existing fleet with electric ones. This will likely increase consumption of electricity.
Wider adoption of bitcoin will likewise increase the consumption of electricity.
Electricity production (not consumption) is what causes emissions. More consumption will increase the demand for production, which will increase investment in production.
Given that we're in the middle of a green energy transition, increasing investment in production will likely accelerate that transition -- the stated goal of Tesla.
Since everyone is recommending other resources, here are some C developers whose code I enjoy very much (or I just really like the software they created with C):
* A Zero W hooked up to a PM2.5 sensor to do air quality monitoring in the house. Just bought a couple more sensors for it (VOC, eCO2, etc.), but haven't hooked them up yet.
* A 3B+ running the UniFi controller for my home network.
* One is running a custom Hue automation I built to shift the color temperature of the lights throughout the day.
* One is built into an internet-connected dog treat dispenser I made as a gift.
* A rather dusty Pi is running CNCjs so I can have a decent interface to my cheap grbl CNC.
* And finally I have a Pi running OctoPrint for my 3D printer.
And that's just the ones currently running. I've got two more in progress. One to automate an exhaust fan based on inside and outside temperatures. Another is destined for the garage where it will replace the not-so-great MyQ "smart" functionality of the garage door opener.
To each their own I suppose, but I've been consuming RasPis like candy. $60 all-in gets you a fairly beefy platform with almost all the I/O you could require and a vast ecosystem of software and HATs. Honestly their only downside is that at some point I'll have to reconfigure my home network when I start exhausting my current internal /24 with 200 RasPis.