I've had great luck with 10Gtek modules both with Mikrotik gear, with DACs, and one that is connected to an upstream Juniper switch. I'm curious which modules were the most troublesome.
* I will note that the 10gb sfp+ modules from 10gtek on a Mikrotik just don't work.
Funnily enough, this 10gtek worked on one of my 3 switches, but I could only establish a gigabit connection. I returned it [1]
These 10gtek fiber modules on the other hand have worked flawlessly so far. [2]
This Mikrotik module would not establish a 10 gb link with my Thunderbolt dock no matter what I tried. Works fine with my servers though so I swapped it out.
I've pretty much resigned myself to just buying the full-brand Ubiquiti SFP+ adapters [4] for converting to copper.
I recently purchased [5] to run to my living room, but I have not found the time/energy to do the run.
I tried converting everything to copper as well, but the copper modules draw a lot of power, and it ended up not working out due to the greatly increased power usage (mostly because the networking "closet" wasn't really designed for it). So beware if you're moving to copper.
Make sure you also pay attention to the distance rating of the SFP. I had a very similar experience with modules sometimes not linking at the right speed. It turned out I was running 50 meters of cable on an SFP rated for 30 meters. I got the correct one, with as low a wattage rating as possible, and it's been rock solid ever since.
btw, if you are using 10gbe copper modules, take a look at their temperature. some of mine were getting to 92C i think. had to put a bunch of heatsinks on them
I put a couple of Noctua NF-A14 fans over the top ventilation holes in my rack, with the silicone mounting thingies and the NA-FC1 PWM controller. They are almost silent in winter. The switch with 10Gb copper is under the fans.
i opened switch and put noctua inside to cool sfp cage that i added heatsink to, in addition to heatsinks on sfp+ module itself. it dropped temperature from 92c to 75c. year later i replaced it with fiber run.
I have found that a fiber patch cable paired with two SFPs is cheaper and more power efficient than an equivalent UTP setup. This led me to move to all fiber/DAC for 10Gb, save for the 10Gb UTP link to my router, which lacks SFP.
I have the 8-port 10Gb + 1Gb switch from Mikrotik, and the UTP SFPs run stupid hot because they have to drive a cable at GHz speeds. The fiber and DAC (direct attach cable) SFPs are cool to the touch by comparison.
Newer units (not all) in the US come pre-charged up to a certain size of lineset. Manufacturers can sell you a whole unit with a charge. The rest is easy to source locally though I haven't tried to get nitrogen myself.
Of course you have exactly one chance with your install this way until you have to call someone.
There's a big difference between "used" as in "I just bought this hard drive and have used it for a week in my home server," and "used" as in "refurbished drive after years of hard labor in someone else's server farm."
Enterprise drives are very different from anything consumer grade. I wouldn't trust a consumer drive used for 2 years, but a true enterprise drive has like millions of hours left of its life.
Quote from Toshiba's paper on this. [1]
> Hard disk drives for enterprise server and storage usage (Enterprise Performance and Enterprise Capacity Drives) have MTTF of up to 2 million hours, at 5 years warranty, 24/7 operation. Operational temperature range is limited, as the temperature in datacenters is carefully controlled. These drives are rated for a workload of 550TB/year, which translates into a continuous data transfer rate of 17.5 Mbyte/s[3]. In contrast, desktop HDDs are designed for lower workloads and are not rated or qualified for 24/7 continuous operation.
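The 17.5 Mbyte/s figure is just the annual workload rating spread evenly over a year; a quick sanity check (plain arithmetic, nothing vendor-specific):

```python
# Sanity-check the quoted figure: 550 TB/year as a continuous transfer rate.
SECONDS_PER_YEAR = 365 * 24 * 3600     # 31,536,000 s
workload_bytes = 550e12                # 550 TB/year (decimal terabytes)

rate_mb_s = workload_bytes / SECONDS_PER_YEAR / 1e6
print(f"{rate_mb_s:.1f} MB/s")         # ~17.4 MB/s, matching the quoted 17.5
```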
From Synology
With support for 550 TB/year workloads and rated for a 2.5 million hours mean time to failure (MTTF), HAS5300 SAS drives are built to deliver consistent and class-leading performance in the most intense environments. Persistent write cache technology further helps ensure data integrity for your mission-critical applications.
Take a look at Backblaze's drive stats. Consumer drives are just as durable as, if not more durable than, enterprise drives. The biggest thing you're getting with enterprise drives is a longer warranty.
If you're buying them on the second-hand market, you likely don't get the warranty (and that's likely why they're on the second-hand market in the first place).
There isn’t a significant difference between “enterprise” and “consumer” in terms of fundamental characteristics. They have different firmware and warranties, usually disks are tested more methodically.
Max operating range is ~60C for spinning disks and ~70C for SSDs. Optimal is <40-45C. The larger facilities afaik tend to run as hot as they can.
> drive has like millions of hours left of its life.
That doesn't apply to a single drive, only to a large number of drives. E.g. if you have 100,000 drives (2.4 million hours MTTF) in a server building with the required environmental conditions and maximum workload, be prepared to replace a drive about once a day on average.
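The once-a-day figure falls straight out of fleet size divided by MTTF (assuming the constant failure rate that an MTTF figure implies):

```python
# Expected drive replacements per day for a fleet, assuming the
# constant failure rate implied by an MTTF figure.
def failures_per_day(n_drives: int, mttf_hours: float) -> float:
    return n_drives * 24 / mttf_hours

# 100,000 drives at 2.4 million hours MTTF -> about one replacement a day.
print(failures_per_day(100_000, 2_400_000))  # 1.0
```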
Drive failure rate versus age is a U-shaped curve. I wouldn't distrust a used drive with healthy performance and SMART parameters.
And you should use some form of redundancy/backups anyway. It's also a good idea to not use all disks from the same batch to avoid correlated failures.
MAUI Blazor Hybrid is great if you don't want to learn XAML. Apple killed Silverlight; Microsoft kept it running for ~20 years. If you stayed close to what Xamarin offered, the migration to MAUI isn't bad from what I've seen.
I wrote a little thing that connects to my IMAP server (I run my own E-mail), goes through the unread E-mails in the inbox, processes them (process MIME multipart, extract HTML, describe images and links, etc) and feeds them to an LLM with a prompt. The LLM decides if the message is spam or not.
It's amazingly accurate.
The interesting thing is that after experimentation I found that it's best if the prompt doesn't describe what is spam. The LLMs are somewhat "intelligent", so the prompt now describes me — who I am, what I do, my interests, etc. It's much more effective and generalizes better to fight new kinds of spam.
And a nice side observation is that this kind of system requires no training (so I no longer collect samples of spam) and is hard to game, because it describes me instead of describing specific kinds of spam.
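Roughly the shape of that pipeline, as a hedged sketch: `imaplib` and `email` are stdlib, but `llm_is_spam` is a placeholder for whatever model API you call, and the persona prompt and folder names are illustrative, not the author's actual setup.

```python
import email
import email.policy
import imaplib

# Illustrative persona prompt: describe the recipient, not the spam.
PROFILE = (
    "You triage mail for a self-hosted sysadmin interested in networking "
    "and homelab hardware. Answer SPAM or HAM for the message below."
)

def extract_text(msg) -> str:
    """Prefer the plain-text MIME part, falling back to HTML."""
    body = msg.get_body(preferencelist=("plain", "html"))
    return body.get_content() if body else ""

def build_prompt(subject: str, body: str) -> str:
    # Truncate the body so huge newsletters don't blow the context window.
    return f"{PROFILE}\n\nSubject: {subject}\n\n{body[:4000]}"

def llm_is_spam(prompt: str) -> bool:
    """Placeholder: send the prompt to your LLM and parse its verdict."""
    raise NotImplementedError

def triage(host: str, user: str, password: str) -> None:
    with imaplib.IMAP4_SSL(host) as imap:
        imap.login(user, password)
        imap.select("INBOX")
        _, data = imap.search(None, "UNSEEN")
        for num in data[0].split():
            _, parts = imap.fetch(num, "(RFC822)")
            msg = email.message_from_bytes(
                parts[0][1], policy=email.policy.default
            )
            prompt = build_prompt(msg["Subject"] or "", extract_text(msg))
            if llm_is_spam(prompt):
                imap.copy(num, "Junk")  # folder name varies by server
                imap.store(num, "+FLAGS", "\\Deleted")
```

The interesting part is entirely in `PROFILE` and `build_prompt`: the classifier carries no spam examples at all, only a description of the recipient.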
Visual Studio is a bad example. It's used for Windows, Web, and Mobile. The big difference between Visual Studio and VSCode is cost: Visual Studio Pro is $100/month and Enterprise is $300/month, while VSCode is free. It was an incredibly smart marketing play by Microsoft.
At a call center that had a whole datacenter in the basement. They had two weeks of fuel on hand at all times. Being on the border of a state, they also had a second grid connection in case one failed.
The whole area lost power for weeks, but the gym was open 24/7 and became very busy during that time.
But that is the huge advantage of QUIC: it does NOT totally outcompete TCP traffic on shared links (we already have BitTorrent over UDP for that purpose). They redesigned the protocol five times or so to achieve that.
We have had great success with MAUI Blazor Hybrid. With each release, hot reload gets better, with fewer edge cases. I also found that having a separate .cs file instead of everything in one file helps with highlighting and IntelliSense.