QNAP had a minor version upgrade a while back that re-enabled auto-updates which had explicitly been disabled, and a major version update (v4 to v5) followed not long after. I was intentionally holding off on upgrading to v5 because a Samba-related issue broke my primary workload, and I almost got burned by it. Yesterday I got an email from one of my QNAPs saying an automatic firmware update was scheduled for 00:00 today, even though I have auto-updates turned off.
I don't think it actually updated today, but they clearly don't have their act together when it comes to managing updates, and I'm not willing to depend on any of their stuff.
Synology is more complicated, and it ultimately comes down to their use of BTRFS. I don't know a ton about filesystems, but the way I understand it, BTRFS allocates extents and then puts blocks into those extents. Depending on your workload, you can end up with orphaned blocks in those extents that prevent space reclamation (because it reclaims whole extents, not individual blocks), and that can result in runaway space usage. Search for "BTRFS missing space".
I may not have gotten that 100% correct, but I think the basic idea is close.
My workload (backup storage) overwrites random blocks in existing files and that's one of the scenarios that exacerbates the issue. I've ended up with empty LUNs on a Synology that are "using" TBs of space on the containing volume.
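A toy model can make the mechanism above concrete. This is a hypothetical sketch, not real BTRFS internals: it just assumes space is handed out in multi-block extents, an extent is only freed once none of its blocks are referenced, and overwrites are copy-on-write into new extents. Under those assumptions, backup-style random overwrites leave almost every old extent pinned by a few live blocks:

```python
# Toy model (NOT real BTRFS internals) of extent-granular reclamation.
# Assumption: an extent with any live block still pins its full capacity,
# and a CoW overwrite lands the new data in a fresh extent.
import random

class Volume:
    def __init__(self):
        self.capacity = []   # per-extent: total blocks the extent occupies
        self.live = []       # per-extent: blocks still referenced

    def alloc_extent(self, blocks):
        self.capacity.append(blocks)
        self.live.append(blocks)
        return len(self.capacity) - 1

    def write_file(self, blocks, extent_size=128):
        """Initial sequential write, packed into full-size extents."""
        refs = []
        for start in range(0, blocks, extent_size):
            n = min(extent_size, blocks - start)
            e = self.alloc_extent(n)
            refs.extend([e] * n)
        return refs

    def overwrite(self, refs, i):
        """CoW overwrite of logical block i: old copy is orphaned, but its
        extent is only freed when its live count reaches zero."""
        self.live[refs[i]] -= 1
        refs[i] = self.alloc_extent(1)   # new data goes into a fresh extent

    def allocated_blocks(self):
        # Partially-live extents still pin their full capacity.
        return sum(c for c, l in zip(self.capacity, self.live) if l > 0)

random.seed(0)
vol = Volume()
refs = vol.write_file(12800)             # a "file" of 100 full 128-block extents
for _ in range(5000):                    # backup-style random-block overwrites
    vol.overwrite(refs, random.randrange(len(refs)))

print("live data:", len(refs), "blocks")
print("allocated:", vol.allocated_blocks(), "blocks")
```

Even though the file's live size never changes, the allocated space grows well past it, because the 5000 scattered overwrites almost never fully drain any of the original 128-block extents.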
The Synology can also have a pretty complicated "stack" by the time you get your data onto it. I think I had an image-based LUN on a BTRFS volume on mdadm RAID1, and that was achieved through the GUI without making any crazy choices, AFAIK.
I went all-in on Synology a few years ago. I can manage a bunch of drives myself, but that's not how I want to spend my free time. The Synology just sits in a corner of the house, doing its thing 24/7/365 without me futzing around with it.
My first (childhood) computer was a 486 (DX I think?). Pentiums were already out, even PIIs maybe. The LEDs on the front (remember those?) said "100", but I was always suspicious of these, because my computer seemed a _lot_ slower (at playing games – what else) than my friend's 133MHz... But his was a Pentium(!), so I never got to the bottom of it.
For the record, I needed to shrink Doom's view to about half the screen (running in DOS) to get a decent framerate. That can't have been 100MHz, right? Does anyone here have a comparable benchmark from a "true" 100MHz 486 I can compare with?
Thus began my fascination with computers. Booting up DOS in some low-memory mode in order to squeeze every CPU cycle out of that thing. Working within constraints taught me a lot.
I didn't have internet at the time, but this thing had a 14.4k modem that I tried to get running. When the modem was in use, the mouse froze (and vice-versa). They shared the same IRQ, set by a jumper, I think. I ended up frying the motherboard trying to fix this issue. I didn't have a computer for a while, but after lots of pleading, and my parents seeing that I was serious about this, they eventually got me a Pentium 2 (450MHz or so!). But alas, it had a Voodoo Banshee video card (with notoriously bad drivers that often simply hard-crashed). And so I was working within another constraint...
The Pentium was superscalar and could often run two operations per cycle, so the difference between a 133MHz Pentium and a 100MHz 486 was way more than just 33MHz. That said, I'm pretty sure I recall running Doom with more than a half-size window, if not an almost-full window, on a 486.
I only have my unreliable memory to go on, but on a 486DX/33 I recall playing Doom 2 at the second-from-top view size, at least, without noticeable lag. But what sound card did you have? In the late 90s, hardware manufacturers made budget cards with processing done in software rather than on dedicated chips. If you had one of those, it may have been too much for a 486 to handle. It's also probably the cause of your modem problems, as "soft modems" were notorious for being unreliable.
Obviously these cards were meant for use in Pentiums, with their extra processing power. But some stingy integrators (*cough* Packard Bell *cough*) would slap together the lowest-cost components without regard to whether they'd actually work properly.
> My first (childhood) computer was a 486 (DX I think?). Pentiums were already out, even PIIs maybe. The LEDs on the front (remember those?) said "100", but I was always suspicious of these, because my computer seemed a _lot_ slower (at playing games – what else) than my friend's 133MHz... But his was a Pentium(!), so I never got to the bottom of it.
A Pentium is much faster than a 486 at the same clock speed, so that shouldn't be surprising.
I mean, I know he's a bit of an odd character and a showman, but do you actually think he's that "out of his depth" that he's desperately trying to crib other people's code? He's accomplished some pretty impressive things in a pretty short career – for now I would take him at face value re: asking for code as a clever recruiting strategy.
I've got to admit I have no idea how his experience is relevant to Twitter. But apparently he's working for free. (Well, cost of living in San Francisco, which is not nothing, but it's not gaining him financially.) I'm not surprised Elon is happy to hire someone of Hotz's fame and skill as an intern.
Yeah, it seems like it should be possible for the DB engine to aggregate all these increments into one update. If you have two increments by one each in the queue, why not make it a single increment by two? I'm not sure, though, how much computing power it would take to figure that out...
Not necessarily. If both updates are in a single transaction then it's valid for the query planner to batch them, although that seems unlikely in the use case this table layout is designed for.
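The merging idea is easy enough to do on the application side, ahead of the database. Here's a minimal sketch of a hypothetical write-behind coalescer (the `counters` table, `n` column, and key names are made up for illustration): instead of issuing one UPDATE per event, it buffers per-key deltas and flushes a single combined increment. A real implementation would also have to deal with transaction boundaries and crash-safety, which this deliberately ignores:

```python
# Hypothetical write-behind coalescer: buffer increments per key and
# flush one combined "n = n + delta" statement per key instead of one
# statement per event. Sketch only; ignores transactions and durability.
from collections import defaultdict

class IncrementCoalescer:
    def __init__(self):
        self.pending = defaultdict(int)   # key -> accumulated delta

    def increment(self, key, delta=1):
        self.pending[key] += delta        # two +1s collapse into one +2

    def flush(self):
        """Return the batched (sql, params) pairs that would be executed."""
        stmts = [
            ("UPDATE counters SET n = n + %s WHERE id = %s", (delta, key))
            for key, delta in self.pending.items()
            if delta != 0
        ]
        self.pending.clear()
        return stmts

c = IncrementCoalescer()
c.increment("page:42")
c.increment("page:42")        # two queued increments by one each...
batch = c.flush()
print(batch)                  # ...flush as a single increment by two
```

The CPU cost of the merge itself is trivial (one dict update per event); the hard part is the semantics, e.g. what happens to the buffered deltas if the process dies before a flush.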
I've been using it in prod for almost 10 years now, almost without issues. Personally I'm still a very happy customer. FWIW, it's a low-traffic app, so cost isn't an issue. I'm happy to pay Heroku a bit more for the developer ergonomics over running my own server – it has saved me a ton of time.
Luckily I was not affected by the recent "issues", since I'm not using the GitHub integration.