You could argue that these LLMs will benefit more users than the value that gets lost in the sea of SEO garbage. But my point is that the openness won't stop, because people will still need to show their craft for better employment opportunities.
Man, I use C++ on personal projects for OpenCV (and Python performance is pretty bad for my use case), but oh god, I don't think I could ever get a C++ job; there is a universe of features I won't dare to touch anytime soon. I just use the "modern" and simple stuff and the code looks pretty good (imo), and I don't have to deal with gibberish compiler errors (except when std::format goes crazy). std::move is cool though. Rvalue vs. lvalue is a bit much, but the performance, the clear "giving up" of a resource, and not having to use pointers make it worth it (I just use it for rvalue passing; there's probably more to it).
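For what it's worth, the "clear giving up of a resource" part is most of what matters day to day. A minimal sketch, assuming a plain std::vector stands in for a frame buffer (nothing OpenCV-specific here):

    #include <utility>
    #include <vector>

    // Takes ownership of the buffer by value; callers hand it over with std::move.
    void consume(std::vector<unsigned char> buf) {
        // ... process the frame; 'buf' owns the storage now ...
        (void)buf;
    }

    int main() {
        std::vector<unsigned char> frame(1920 * 1080 * 3);
        consume(std::move(frame)); // no copy: the vector's heap storage is transferred
        // 'frame' is left in a valid but unspecified (typically empty) state
    }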
You could get a C++ job. One of the issues with C++ is that most people using it for anything useful only need to know some particular subset, only actually know that subset, and then get bitten when the subset they know changes.
Sure, years ago. But today Ethernet is just as scammy as everyone else; we've been stuck at 1 Gbps on consumer-grade hardware for more than 15 years. There are claims (unverified ofc) about executives boasting about their stupid margins. A 1Gb switch is like 10-20 euros, meanwhile a 2.5Gbps one is over 100...
2.5Gb is downshifted 10Gb with the same line coding, just with 1/4 the symbol rate.
This means that it inherits all the complexities of 10GbE, while tolerating cheaper connectors and cables.
10GbE uses DSQ128 PAM-16 at 800Msym/s. 2.5G just does quarter-rate at 200Msym/s.
1000BaseT uses trellis coded PAM-5, a significantly less complex modulation.
When one factors in the complexity of the line code and all the equalisation and other processing in the analog frontend, things get expensive.
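Back-of-envelope sanity check on those symbol rates (assumptions: 4 wire pairs, ~3.125 information bits per symbol per pair for the DSQ128 PAM-16 family, and 2 information bits per symbol for 1000BASE-T's trellis-coded PAM-5 at 125Msym/s):

    #include <cstdio>

    int main() {
        const double pairs = 4.0;
        // symbol rate per pair * pairs * information bits per symbol
        std::printf("10GBASE-T : %.1f Gb/s\n", 800e6 * pairs * 3.125 / 1e9); // 10.0
        std::printf("2.5GBASE-T: %.1f Gb/s\n", 200e6 * pairs * 3.125 / 1e9); //  2.5
        std::printf("1000BASE-T: %.1f Gb/s\n", 125e6 * pairs * 2.0   / 1e9); //  1.0
    }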
Copper 10Gb interfaces run very hot for a reason. It takes quite a bit of signal processing and tricks to push 10Gb over copper, at least for any significant distances.
It's not really about what the cable can handle, but more about what it's specified to handle at maximum length in a dense conduit.
At shorter lengths, and in single runs, it's always worth trying something beyond what the wiring jacket says. I've run GigE over a run with a small section of Cat3 coupled to a longer Cat5e run (repurposed 4-pair phone wire), and just recently set up a 10G segment on a medium length of Cat5e. The only catch: while I think 2.5G/5G devices do test for wiring quality, the decimal speeds don't. Auto-negotiation happens on the 1Mbps link pulses, so unmanaged devices can easily negotiate to speeds that won't work. If your wiring is less than spec, you need to be able to influence negotiation on at least one side, in case it doesn't work out.
I can't make heads or tails of your comment. What is scammy about Ethernet and what 'stupid margins' does Ethernet have? It's a networking standard, not a company.
2.5G or even 10G is not that much more expensive, yet companies making consumer electronics sell it at a considerable premium for what is essentially the same cost difference as making an 8GB vs 16GB flash drive. Of course, regular internet users don't need more than 2.5G (and couldn't use it in most of the world due to ISP monopolies), so anything faster than gigabit is a target for segmentation.
If you have a gigabit internet connection, then most of the value of 10G comes from data sharing within the intranet, which just never caught on outside of hobbyists. And a 1G switch can still handle a lot of that. You don't even need 10G for LAN parties, and whether backups can go faster depends on the storage speed and whether you actually care. Background backups hide a lot of sins.
I’m hoping a swing back to on-prem servers will justify higher throughput, but that still may not be the case. You need something big to get people to upgrade aging infrastructure. What would be enough to get people to pay for new cable runs? 20Gb? 40?
Rant aside, I think there is an argument to be made that 2.5gbps switches "should" be cheaper now that 2.5gbps NICs have become fairly commonplace in the mainstream market.
Case in point, I have a few recent-purchase machines with 2.5gbps networking but no 2.5gbps switch to connect them to because I personally can't justify their cost yet.
I suppose I could bond two 1gbps ports together, or something, but I like to think I have other yaks to shave right now.
Personally I went with Mikrotik's 10Gb switch, but that needed SFP+ port thingies (which was fine for me, as I was connecting one old enterprise switch via fiber, two servers via direct-attach copper, and the Mac over Cat7 or whatever cable).
2.5Gb is silly in my opinion unless it's literally "free" - you're often better off with old 10Gb equipment.
> 2.5Gb is silly in my opinion unless it's literally "free" - you're often better off with old 10Gb equipment.
I think 2.5G is going to make it in the marketplace, because 2.5G switches are finally starting to come down in price, while 10G switches are roughly twice the price, and that might be for SFP+, so you'll likely need transceivers unless you're close enough for DAC. (NIC prices are also pretty good now, as siblings noted. But if you go with used 10G, you can get good prices there too; I've got four dual 10G cards and paid between $25 and $35 shipped for each.)
Yeah, it's that cost that is the problem. If I'm paying over a hundred bucks for a switch I might as well go higher and consider 10gbps options.
2.5Gbps hardware needs to come down to at least the $30 to $40 range if it's going to make any sense. Otherwise, it'll stay niche hardware for diehard enthusiasts or specific professionals only.
The problem with 2.5G is that it's not enough of an upgrade over 1G to warrant buying all new switches and NICs to get it. For that matter few home users push around enough data for 10G to be a big win.
IMHO this is why Ethernet has stalled out at 1G. People still don't have large enough data needs to make it worthwhile. See also: the average storage capacity of new personal computers. It has been stuck around 1TB for ages. Hell, it went down for several years during the SSD transition.
2.5Gbps is literally 2.5x the speed of gigabit Ethernet, so that's going to be very noticeable even for most home users if they do any amount of LAN file sharing.
It's really just the cost that's the problem, because paying 4x, 5x, or even 6x the cost of gigabit hardware for a 2.5x performance boost doesn't make a lot of sense.
If 2.5Gbps peripheral hardware costs come down, I will happily bet it takes off.
Yeah, but you probably have more than one drive RAID'd in that NAS, so you would almost certainly get faster transfers (granted: sequential) if Ethernet weren't the bottleneck.
2.5gbps ethernet translates to roughly 250MB/s in real world transfer speeds, that's a lot. Literally over double real world gigabit transfer speeds, and far less likely to bottleneck you.
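That ~250MB/s figure sits a bit below the theoretical TCP goodput ceiling at MTU 1500; a rough sketch of the arithmetic (the frame overhead figures are the standard Ethernet ones, and the gap to real-world numbers is mostly stack and disk overhead):

    #include <cstdio>

    int main() {
        // Per full-size frame: 1460 bytes of TCP payload ride on 1500 (IP MTU)
        // + 14 Ethernet header + 4 FCS + 8 preamble/SFD + 12 interframe gap.
        const double payload_bytes = 1460.0;
        const double wire_bytes    = 1500.0 + 14 + 4 + 8 + 12;   // 1538
        const double efficiency    = payload_bytes / wire_bytes; // ~0.95

        std::printf("2.5GbE goodput ceiling: ~%.0f MB/s\n", 2.5e9 / 8 * efficiency / 1e6); // ~297
        std::printf("1GbE goodput ceiling:   ~%.0f MB/s\n", 1.0e9 / 8 * efficiency / 1e6); // ~119
    }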
You may actually be right; sorry, my rant may have been misguided. Being a "networking standard" doesn't make it free of royalties though; don't/didn't companies pay to use the Wi-Fi protocol?
It's a big loss that wired networking speeds have plateaued, but I feel it's more about apps and people adapting to slow and choppy wireless networks, which penalise apps that lean on quality connectivity and act as bottlenecks in home networks (e.g. you don't need 10G broadband when the Wi-Fi will cap everything to slow speeds anyway). And mobile devices, which had much smaller screens and less memory than computers for a decade-plus, stalled the demand that Moore's law used to drive.
People buy Ethernet for a reliable connection and reliable latency (no packet drops), and to get 1Gbps. Few consumers have a need for more, since internet speeds also rarely exceed 1Gbps.
Sure, anyone with a NAS might like more, but that's a tiny market. And tiny markets lack economy of scale, causing prices to be high.
You can have 10G with eg, Mikrotik at a reasonable price.
One problem with it is that the copper tech is just power hungry. It may actually make sense to go with fiber, especially if you might want even more later (100G actually can be had at non-insane prices!)
Another problem is that it's CPU intensive. It's actually not that hard to run into situations where quite modern hardware can't actually handle the load of dealing with 10G at full speed especially if you want routing, a firewall, or bridging.
It turns out Linux bridge interfaces disable a good amount of the acceleration the hardware can provide and can enormously degrade performance, which makes virtualization with good performance a lot trickier.
You can go fast if you don't do anything fancy with the interface.
If you, say, want bridged networking for your VMs and add your 10G interface to virbr0, poof: a good chunk of your acceleration vanishes right there.
Routing and firewalling also cost you a lot.
There are ways to deal with this, e.g. SR-IOV virtual functions, but the point is that even on modern hardware, 10G is no longer a foolproof thing to have working at full capacity. You may need to do a fair amount of tweaking to get things performing well.
The other issue is that unless your computer is acting as a router or a bridge, you need to do something with that 10Gb data stream. SSDs have only recently gotten fast enough to just barely support reading or writing that fast. But even if you do find one that supports writes that fast, a 10GbE card could fill an expensive 4TB drive in less than an hour. Good luck decoding JPEGs and blitting them out to a web browser window that fast.
Consumer SSDs used to max out at about 550MB/s, some still do. You need a larger and more modern drive to do 1.25GB/s sustained write. Even then buffering can get you.
2.5-inch and M.2 SATA SSDs max out around 550MB/s due to the limits of SATA3 connections, which cap out at 6Gbps.
M.2 NVMe SSDs meanwhile communicate over PCIe, generally using four lanes; the latest PCIe 5.0 SSDs can do around 15GB/s if I recall, PCIe 4.0 drives up to around 7GB/s, and PCIe 3.0 drives up to around 3GB/s.
Other potential bottlenecks can occur with the motherboard chipset, controller, and NAND flash, but those are details.
TL;DR: Any NVMe SSD can saturate a 10Gbps Ethernet connection.
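Rough interface ceilings behind those numbers, assuming the usual encodings (8b/10b for SATA3, 128b/130b for PCIe 3.0/4.0) and a 4-lane NVMe link:

    #include <cstdio>

    int main() {
        const double sata3    = 6e9  * 8.0 / 10.0    / 8     / 1e6; // ~600 MB/s before protocol overhead
        const double pcie3_x4 = 8e9  * 128.0 / 130.0 / 8 * 4 / 1e6; // ~3940 MB/s
        const double pcie4_x4 = 16e9 * 128.0 / 130.0 / 8 * 4 / 1e6; // ~7880 MB/s
        std::printf("SATA3:    ~%.0f MB/s\n", sata3);
        std::printf("PCIe3 x4: ~%.0f MB/s\n", pcie3_x4);
        std::printf("PCIe4 x4: ~%.0f MB/s\n", pcie4_x4);
        std::printf("10GbE needs ~1250 MB/s, so even a PCIe 3.0 x4 drive has headroom\n");
    }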
The core problem is that the Linux kernel uses interrupts for handling packets. This limits Linux networking performance in terms of packets per second. The limit is about a million packets per second per core.
For reference, 10GbE is about 14.9 million packets per second at line rate with minimum-size packets.
This is why you have to use kernel-bypass software in user space to get line-rate performance above 10G on Linux.
Popular software for this use case utilizes DPDK, XDP, or VPP.
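The packets-per-second arithmetic, assuming minimum-size (64-byte) frames plus the standard preamble/SFD and interframe gap:

    #include <cstdio>

    int main() {
        // 64-byte frame + 8 bytes preamble/SFD + 12 bytes interframe gap = 84 bytes on the wire
        const double bits_per_frame = (64 + 8 + 12) * 8.0; // 672 bits
        std::printf("10GbE line rate: %.2f Mpps\n", 10e9 / bits_per_frame / 1e6); // ~14.88
        std::printf("1GbE line rate:  %.2f Mpps\n",  1e9 / bits_per_frame / 1e6); // ~1.49
    }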
You don't need an interrupt per packet, at least not with sensible NICs and OSes. Something like 10k interrupts per second is good enough: pick up a bunch of packets on each interrupt. You lose out slightly on latency, but gain a lot of throughput. Look up 'interrupt moderation'; it's not new, and most cards should support it.
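Rough batch size implied by that interrupt rate, using the ~14.9 Mpps small-packet line rate figure from above:

    #include <cstdio>

    int main() {
        const double pps  = 14.88e6; // 10GbE small-packet line rate
        const double irqs = 10e3;    // moderated interrupt rate
        std::printf("~%.0f packets serviced per interrupt\n", pps / irqs); // ~1488
    }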
Professionally, I ran dual Xeon 2690 v1 or v2 boxes to 9Gbps for HTTPS downloads on FreeBSD; HTTP hit 10G (only had one 10G link to the internet on those machines), but crypto took too much CPU. Dual Xeon 2690 v4 ran to 20Gbps, no problem (2x 14-core Broadwell, much better AES acceleration, faster RAM, more cores, etc., with dual 10G to the internet).
Personally, I've just set up 10G between my two home servers, and can only manage about 5-8Gbps with iperf3, but that's with a Pentium G2020 on one end (dual-core Ivy Bridge, 10 years old at this point), and the network cards are configured for bridging, which means no TCP offloading.
Interrupt moderation only gives a modest improvement, as can be seen from the benchmarking done by Intel.
Intel would also not have gone through the effort of developing DPDK if all you had to do to achieve line-rate performance was enable interrupt moderation.
Furthermore, quoting Gbps numbers is beside the point when the limiting factor is packets per second. It is trivial to improve Gbps numbers simply by using larger packets.
I'm quoting bulk transfer, with a 1500 MTU. I could run jumbo packets for my internal network test and probably get better numbers, but jumbo packets are hard. When I was quoting HTTPS downloads over the public internet, that pretty much implies MTU 1500 as well, and that was definitely the case.
If you're sending smaller packets, sure, that's harder. I guess that's a big deal if you're a DNS server, or doing VoIP (audio only); but if you're doing any sort of bulk transfer, you're getting large enough packets.
> Intel would also not have gone through the effort to develop DPDK if all you had to do to achieve linerate performance would be to enable interrupt moderation.
DPDK has uses, sure. But you don't need it for 10G on decent hardware, which includes 7 year old server chips, if you're just doing bulk transfer.
I started doing some C++ development again, and Google is just giving me wrong/outdated solutions or unanswered questions from SO, while ChatGPT is on point most of the time.