For many ephemeral workloads, sure, but that comes at the expense of generally worse and less consistent CPU performance.
There are plenty of workloads where I’d love to double the memory and halve the cores compared to what the memory-optimised R instances offer, or where I could further double the cores and halve the RAM from what the compute-optimised C instances can do.
“Serverless” options can provide that to an extent, but it’s no free lunch, especially in situations where performance is a large consideration. I’ve found some use cases where it was better to avoid AWS entirely and opt for dedicated options elsewhere; for certain workloads, AWS is remarkably uncompetitive.
I like that https://discordstatus.com/ shows the API response times as well. There are times when Discord seems to have issues, and those usually correlate very well with increased API response times.
Reddit Status used to show API response times too, back when I used the site, but they've really watered it down since then. As far as I know, everything that goes there has to be entered manually now. Not to mention that one of the few remaining sections is for "ads.reddit.com", classic.
I set up Immich last week and I absolutely love it. Docker is my "happy place" and I found the setup pretty straightforward, though it does have some rough edges that I anticipate will be sorted out as the project continues to mature.
I showed Immich to my partner and they loved it so much that we've ordered significantly more storage for the server to accommodate it. We're currently using both Google Photos and OneDrive, but with this we'll be ditching OneDrive and filling that niche with Immich (as well as expanded network storage in general).
The website and documentation are super clear about not using it as the only source of photos. This is why we'll keep using Google Photos, and why I'll also be backing up Immich and portions of the network storage to B2 via restic. I've used this snapshotting pattern for my general server data for years, and it's even saved me a couple of times. Backups are something you hope to never need, but boy is it satisfying when you do need them and have them set up properly!
This is similar to my setup and experience as well. I have a multi-purpose server running Samba with 10 Gbps SFP+. I have 2.5 GbE on my desktop and easily saturate that when transferring files.
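For context, saturating a 2.5 GbE link works out to roughly 290-310 MB/s of file-transfer throughput. A quick back-of-the-envelope conversion (the ~6% protocol overhead figure is an assumption; real overhead depends on frame size and the transfer protocol):

```python
link_bits_per_s = 2.5e9                         # 2.5 GbE line rate
theoretical_mbps = link_bits_per_s / 8 / 1e6    # bits -> megabytes per second
overhead = 0.06                                 # rough Ethernet/TCP framing overhead (assumption)
practical_mbps = theoretical_mbps * (1 - overhead)

print(f"theoretical: {theoretical_mbps:.1f} MB/s")  # 312.5 MB/s
print(f"practical:   {practical_mbps:.1f} MB/s")    # ~293.8 MB/s
```

Even a single modern SATA SSD on the server side can sustain that, which is why a 10 Gbps server uplink paired with 2.5 GbE clients is such a comfortable combination.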
I even use the network for stuff like old game ROMs and ISOs for emulation. The random access times are still orders of magnitude better than I’d be able to get with any HDD.
> I have a multi-purpose server running Samba with 10 Gbps SFP+.
Yeah, ditto. My desktop, and the switches along the path to the server/router, also have 10 Gbit fiber connections.
Stupid question: Are you aware of the 10Gtek company? If not, my experience over the past several years is that they sell inexpensive SFP+ modules that work just fine... so if you're ever in the market for more modules, give them a try (if you haven't already).
I don't see a need for it yet though. I'm a really heavy user (an IT specialist with more than a hundred devices on my networks) and I really don't need it.
These things are nice-to-have until they become sufficiently widespread that typical consumer applications start to require the bandwidth. That comes much later.
E.g.: 8K 60 fps video streaming benefits from data rates up to about 1 Gbps in a noticeable way, but that's at least a decade away from mainstream availability.
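As a rough sanity check on that figure (the 100:1 compression ratio below is an assumption for illustration; real codec bitrates vary widely with content and quality settings):

```python
width, height, fps = 7680, 4320, 60   # 8K at 60 fps
bits_per_pixel = 12                   # 8-bit colour with 4:2:0 chroma subsampling

# Uncompressed bitrate of the raw video signal
raw_bps = width * height * fps * bits_per_pixel
print(f"raw: {raw_bps / 1e9:.1f} Gbps")  # ~23.9 Gbps uncompressed

# A modern codec at roughly 100:1 compression (assumption) lands near 240 Mbps;
# dialling compression down for higher quality pushes toward the ~1 Gbps mark.
compressed_bps = raw_bps / 100
print(f"compressed: {compressed_bps / 1e6:.0f} Mbps")
```

So ~1 Gbps corresponds to relatively lightly compressed 8K60, which is why mainstream streaming services sit well below it today.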
The other side of this particular coin is, when such bandwidth is widely available, suddenly a lot of apps that have worked just fine are now eating it up. I'm not looking forward to 9 gigabyte Webpack 2036 bundles everywhere :V
I set up a self-hosted FreshRSS instance as well, and I feel the same benefits from it. I got back into RSS after Reddit pushed its API changes through, and it’s been so refreshing. I’m using social media a lot less now and those tingly FOMO feelings all but disappeared.
You can engine brake with a lot of automatic vehicles as well, since they’ll still have semi-automatic modes. I drive a car with a CVT and I’m able to “down-shift” for engine braking by using fake gears at specific ratios.
I feel like the distinction here is that you have to go out of your way to do this with ICE vehicles? Maybe I’m in the minority then, but I never do this with my car (which has a CVT).
I tried it a few times, thought it was a gimmick and now I just use gas or brake.