Hacker News | lunfard000's comments

Much better as home servers (~450 euros for a barebones 1340P is a pretty good deal). They have really good I/O and Linux support.


Maybe add SerenityOS to the title? Almost sure that lock_guard is the idiomatic way to do this in C++, so I'm not buying the "forget to lock" framing.


It’s a general C++ trick.


FWIW, I've encountered "forget to lock" in production C++.
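For anyone following along, a minimal sketch of the "can't forget to lock" pattern being discussed (the names `MutexProtected`, `with_locked`, and `bump` are illustrative, not SerenityOS's actual API):

```cpp
#include <mutex>
#include <utility>

// Bundle the data with its mutex so the only way to reach the data is
// through a scoped accessor; std::lock_guard unlocks on scope exit (RAII).
template <typename T>
class MutexProtected {
public:
    template <typename F>
    auto with_locked(F&& f) {
        std::lock_guard<std::mutex> guard(m_mutex);
        return std::forward<F>(f)(m_value);
    }

private:
    std::mutex m_mutex;
    T m_value{};
};

// The type now enforces the locking discipline: there is no way to
// touch the counter without going through with_locked().
int bump(MutexProtected<int>& counter) {
    return counter.with_locked([](int& v) { return ++v; });
}
```

With a plain `std::mutex` sitting next to the data, nothing stops a caller from skipping the lock; wrapping the data makes "forget to lock" a compile error rather than a code-review catch.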


Anecdotally, Argentina tried it[0]. The project was sold to Perón as a way to create artificial suns.

0:https://en.wikipedia.org/wiki/Huemul_Project


Let's not pretend it's just altruism that makes people "open-source" their art or code. Most of them do it to get noticed.


The benefits of openness compound irrespective of the motivations of creators.


You could argue that these LLMs will benefit more users than the value that's lost to a sea of SEO garbage. But my point is that the openness won't stop, because people will still need to show their craft for better employment opportunities.


On the other hand, this will enable developers without art skills to produce indie games with professional-looking art on their own.


Not only art: Obsidian is already using it for voices[0]. The future is here.

0:https://www.youtube.com/watch?v=YajBa5PO1Hk


Man, I use C++ in personal projects for OpenCV (Python performance is pretty bad for my use case), but oh god, I don't think I could ever get a C++ job; there is a universe of features I won't dare to touch anytime soon. I just use the "modern" and simple stuff, the code looks pretty good (imo), and I don't have to deal with gibberish compiler errors (except when std::format goes crazy). std::move is cool though. Rvalues and lvalues are a bit too much, but the performance, the clear "giving up" of a resource, and not having to use pointers make it worth it (I just use it for rvalue passing; there's probably more to it).
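Since the comment leans on std::move mainly for "giving up" a resource, here is a hedged sketch of that one idiom (the types and names are made up for illustration): take the parameter by value and move from it, which serves both lvalue and rvalue callers with a single overload.

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Taking `channels` by value plus std::move covers both copies and moves
// with one constructor -- no separate lvalue/rvalue reference overloads.
class Frame {
public:
    explicit Frame(std::vector<std::string> channels)
        : m_channels(std::move(channels)) {}

    std::size_t channel_count() const { return m_channels.size(); }

private:
    std::vector<std::string> m_channels;
};

std::size_t demo() {
    std::vector<std::string> names{"red", "green", "blue"};
    Frame frame(std::move(names)); // explicitly give up `names` here
    // `names` is now valid but unspecified; don't read from it again.
    return frame.channel_count();
}
```

The by-value-plus-move style is exactly the "modern and simple" subset the comment describes: no pointers, and the hand-off of the resource is visible at the call site.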


You could get a C++ job. One of the issues with C++ is most people using it for anything useful only need to know some particular subset, only actually know that subset, and then get bit when the subset they know changes.


Sure, years ago. But today Ethernet is just as scammy as everyone else; we've been stuck at 1 Gbps on consumer-grade hardware for more than 15 years. There are claims (unverified, of course) about executives boasting about their stupid margins. A 1 Gbps switch is like 10-20 euros, while a 2.5 Gbps one is over 100...


2.5Gb is downshifted 10Gb with the same line coding, just with 1/4 the symbol rate. This means that it inherits all the complexities of 10GbE, while tolerating cheaper connectors and cables. 10GbE uses DSQ128 PAM-16 at 800Msym/s. 2.5G just does quarter-rate at 200Msym/s.

1000BaseT uses trellis coded PAM-5, a significantly less complex modulation.

When one factors in the complexity of the line code and all the equalisation and other processing in the analog frontend, things get expensive. Copper 10Gb interfaces run very hot for a reason. It takes quite a bit of signal processing and tricks to push 10Gb over copper, at least for any significant distances.


> tolerating cheaper connectors and cables

I always find the graphic below handy for telling which Cat cable can handle which Gig speed:

* https://en.wikipedia.org/wiki/Ethernet_over_twisted_pair#Var...


It's not really about what it can handle, but more what it's specified to handle at maximum length in a dense conduit.

At shorter lengths, and in single runs, it's always worth trying something beyond what the wiring jacket says. I've run GigE over a run with a small section of Cat 3 coupled to a longer Cat 5e run (repurposed 4-pair phone wire), and just recently set up a 10G segment on a medium length of Cat 5e. The only caveat: while I think 2.5G/5G devices do test for wiring quality, the decimal speeds don't. Auto-negotiation happens on the 1 Mbps link pulses, so unmanaged devices can easily negotiate to speeds that won't work. If your wiring is less than spec, you need to be able to influence negotiation on at least one side in case it doesn't work out.


I can't make heads or tails of your comment. What is scammy about Ethernet and what 'stupid margins' does Ethernet have? It's a networking standard, not a company.


2.5G or even 10G is not that much more expensive to make, yet companies making consumer electronics sell it at a considerable premium, for what is essentially the same cost difference as making an 8 GB vs a 16 GB flash drive. Of course, regular internet users don't need more than 2.5G (and couldn't use it in most of the world due to ISP monopolies), so anything faster than gigabit is a target for segmentation.


The market at work. There is just no real demand for anything beyond 1G.

The HN crowd is not representative of what would be needed to drive the price tags down on 2.5G stuff.


If you have a gigabit internet connection, then most of the value of 10G comes from data sharing within the intranet, which just never caught on outside of hobbyists. And a 1G switch can still handle a lot of that. You don't even need 10G for LAN parties, and whether backups can go faster depends on the storage speed and whether you actually care. Background backups hide a lot of sins.

I’m hoping a swing back to on-prem servers will justify higher throughput, but that still may not be the case. You need something big to get people to upgrade aging infrastructure. What would be enough to get people to pay for new cable runs? 20Gb? 40?


Rant aside, I think there is an argument to be made that 2.5gbps switches "should" be cheaper now that 2.5gbps NICs have become fairly commonplace in the mainstream market.

Case in point, I have a few recently purchased machines with 2.5gbps networking but no 2.5gbps switch to connect them to, because I personally can't justify the cost yet.

I suppose I could bond two 1gbps ports together, or something, but I like to think I have other yaks to shave right now.


You can get some basic switches that do 2.5gb but it's like $100, a bit more for a brand you might recognize.

https://www.amazon.com/5-Port-Multi-Gigabit-Unmanaged-Entert...

Personally I went with MikroTik's 10Gb switch, but that needed SFP port thingies (which was fine for me, as I was connecting one old enterprise switch via fiber, direct-attach copper for two servers, and cabled Cat 7 or whatever for the Mac).

2.5gb is silly in my opinion unless it's literally "free" - you're often better with old 10gb equipment.


> 2.5gb is silly in my opinion unless it's literally "free" - you're often better with old 10gb equipment.

I think 2.5g is going to make it in the marketplace, because 2.5g switches are finally starting to come down in price, while 10g switches are roughly twice the price, and that might be for sfp+, so you'll likely need transceivers unless you're close enough for DAC. (NIC prices are also pretty good now, as siblings noted. But if you go with used 10G, you can get good prices there too; I've got 4 dual 10G cards and paid between $25 and $35 shipped for each.)


Yeah, it's that cost that is the problem. If I'm paying over a hundred bucks for a switch I might as well go higher and consider 10gbps options.

2.5gbps hardware needs to come down to at least the $30-$40 range to make any sense. Otherwise, it'll stay niche hardware for diehard enthusiasts and specific professionals only.


The NICs can be had for $20 (pretty sure I saw a $11 one the other day but can't find it right now on mobile).


The NICs are reasonable now, yes. The issue is the thing on the other side of the cable; 2.5gbps switches and routers need to come down in price.


The problem with 2.5G is that it's not enough of an upgrade over 1G to warrant buying all new switches and NICs to get it. For that matter few home users push around enough data for 10G to be a big win.

IMHO this is why Ethernet has stalled out at 1G. People still don't have large enough data needs to make it worthwhile. See also: the average storage capacity of new personal computers. It has been stuck around 1TB for ages. Hell, it went down for several years during the SSD transition.


2.5gbps is literally 2.5x the speed of gigabit Ethernet, so that's going to be very noticeable even for most home users if they do any amount of LAN file sharing.

It's really just the cost that's the problem, because paying 4x, 5x, or even 6x the cost of gigabit hardware for a 2.5x performance boost doesn't make a lot of sense.

If 2.5gbps peripheral hardware costs come down, I will happily bet it will take off.


This assumes that the LAN is the bottleneck. Gigabit Ethernet tops out at 120MB/s, which is about the speed of spinning rust on a NAS.


Yeah, but you probably have more than one drive RAID'd in that NAS, so you would almost certainly get faster transfers (granted, sequential) if Ethernet weren't the bottleneck.

2.5gbps Ethernet translates to roughly 250MB/s in real-world transfer speeds; that's a lot. Literally over double real-world gigabit transfer speeds, and far less likely to bottleneck you.
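As a sanity check on those MB/s figures, a back-of-envelope goodput sketch (assuming a plain TCP/IPv4 stream at the standard MTU of 1500, no jumbo frames): the theoretical ceiling for 2.5GbE comes out a bit above the quoted real-world 250MB/s, and 1GbE lands at the familiar ~119MB/s.

```cpp
// Rough TCP goodput at MTU 1500: of each 1500-byte IP packet, 20 bytes go
// to the IPv4 header and 20 to TCP; the wire adds 38 more bytes of
// Ethernet overhead (preamble 8, header 14, FCS 4, inter-frame gap 12).
double tcp_goodput_megabytes_per_sec(double link_bits_per_sec) {
    const double payload_bytes = 1500.0 - 20.0 - 20.0; // 1460 bytes of data
    const double wire_bytes    = 1500.0 + 38.0;        // 1538 bytes on the wire
    const double efficiency    = payload_bytes / wire_bytes; // ~0.949
    return link_bits_per_sec / 8.0 * efficiency / 1e6;
}
// tcp_goodput_megabytes_per_sec(1e9)   -> ~118.7 MB/s
// tcp_goodput_megabytes_per_sec(2.5e9) -> ~296.7 MB/s (ceiling; real transfers land lower)
```

TCP options, retransmits, and drive-side buffering eat into the ceiling, which is why ~250MB/s is a reasonable real-world number for 2.5GbE.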


But that has nothing to do with Ethernet as such, which isn't a 'company making consumer electronics'.


You may actually be right; sorry, my rant may have been misguided. Being a "networking standard" doesn't make it free of royalties though; don't (or didn't) companies pay to use the Wi-Fi protocol?



It's a big loss that wired networking speeds have plateaued, but I feel it's more about apps and people adapting to slow and choppy wireless networks, which penalise apps that leverage quality connectivity and stand as bottlenecks in home networks (e.g. you don't need 10G broadband when the Wi-Fi will cap everything to slow speeds anyway). And mobile devices, which had much smaller screens and memories than computers for a decade-plus, stalled the demand driven by Moore's law.


People buy Ethernet for a reliable connection and reliable latency (no packet drops), and to get 1Gbps. Few consumers need more, since internet speeds also rarely exceed 1Gbps.

Sure, anyone with a NAS might like more, but that's a tiny market. And tiny markets lack economy of scale, causing prices to be high.


You can have 10G with, e.g., MikroTik at a reasonable price.

One problem with it is that the copper tech is just power hungry. It may actually make sense to go with fiber, especially if you might want even more later (100G actually can be had at non-insane prices!)

Another problem is that it's CPU intensive. It's actually not that hard to run into situations where quite modern hardware can't actually handle the load of dealing with 10G at full speed especially if you want routing, a firewall, or bridging.

It turns out Linux bridge interfaces disable a good amount of the acceleration the hardware can provide and can enormously degrade performance, which makes virtualization with good performance a lot trickier.


> Another problem is that it's CPU intensive.

Are there 10GigE cards that do not do things like IP/TCP offloading at this point?

Offloading dates back to (at least) 2005:

* https://www.chelsio.com/independent-research-shows-10g-ether...

* https://www.networkworld.com/article/2312690/tcp-offload-lif...


You can go fast if you don't do anything fancy with the interface.

If you say, want bridged networking for your VMs and add your 10G interface to virbr0, poof, a good chunk of your acceleration vanishes right there.

Routing and firewalling also cost you a lot.

There are ways to deal with this, e.g. with virtual functions, but the point is that even on modern hardware, 10G is no longer a foolproof thing to have working at full capacity. You may need to do a fair amount of tweaking to make things perform well.


The other issue is that unless your computer is acting as a router or a bridge, you need to do something with that 10GB data stream. SSDs have only recently gotten fast enough to just barely support reading or writing that fast. But even if you do find one that supports writes that fast, a 10GbE card could fill an expensive 4TB drive in less than an hour. Good luck decoding JPEGs and blitting them out to a web browser window that fast.


>10GB data stream. SSDs have only recently gotten fast enough to just barely support reading or writing that fast.

10gbps (gigabits per second) is not 10GB/s (gigabytes per second).

Specifically, 10gbps is approximately 1.25GB/s or 1250MB/s.


Consumer SSDs used to max out at about 550MB/s; some still do. You need a larger and more modern drive to do 1.25GB/s sustained writes. Even then, buffering can get you.


That's due to the communication protocol.

2.5 inch and M.2 SATA SSDs max out around 550MB/s due to the limits of SATA3 connections which cap out at 6gbps.

M.2 NVMe SSDs meanwhile communicate over PCIe, generally using four lanes, and the latest PCIe 5.0 SSDs can do around 15GB/s if I recall. PCIe 4.0 drives can get up to around 7GB/s, and PCIe 3.0 drives up to around 3GB/s.

Other potential bottlenecks can occur with the motherboard chipset, controller, and NAND flash, but details.

TL;DR: Any NVMe SSD can saturate a 10gbps Ethernet connection.
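Those per-generation figures can be sanity-checked from the PCIe link parameters (a sketch; it only accounts for the 128b/130b line code, so real drives land somewhat below these ceilings):

```cpp
// Raw usable bandwidth of a PCIe link: transfer rate per lane, times the
// 128b/130b encoding efficiency, times the lane count.
// Gen 3 = 8 GT/s, Gen 4 = 16 GT/s, Gen 5 = 32 GT/s per lane.
double pcie_gigabytes_per_sec(double gigatransfers_per_sec, int lanes) {
    const double encoding = 128.0 / 130.0; // 128b/130b line-code efficiency
    return gigatransfers_per_sec * encoding / 8.0 * lanes;
}
// x4 links: Gen 3 -> ~3.9 GB/s, Gen 4 -> ~7.9 GB/s, Gen 5 -> ~15.8 GB/s,
// all comfortably above 10GbE's ~1.25 GB/s.
```

Even a Gen 3 x4 drive has roughly triple the bandwidth a 10GbE link can deliver, which is the point of the TL;DR.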


TCP/IP offload isn’t the issue.

The core problem is that the Linux kernel uses interrupts for handling packets. This limits Linux networking performance in terms of packets per second. The limit is about a million packets per second per core.

For reference, 10GbE is about 14.9 million packets per second at line rate using minimum-size packets.

This is why you have to use kernel-bypass software in user space to get line-rate performance above 10G on Linux.

Popular software for this use case utilizes DPDK, XDP, or VPP.
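For concreteness, the commonly cited line-rate figure for minimum-size frames follows from the frame's slot on the wire (a sketch: a 64-byte frame plus the 8-byte preamble and 12-byte inter-frame gap):

```cpp
// Packets per second at line rate for minimum-size Ethernet frames:
// each 64-byte frame occupies 84 bytes of wire time once the 8-byte
// preamble and 12-byte inter-frame gap are included.
double line_rate_mpps(double link_bits_per_sec) {
    const double slot_bytes = 64.0 + 8.0 + 12.0; // 84 bytes per frame slot
    return link_bits_per_sec / (slot_bytes * 8.0) / 1e6;
}
// line_rate_mpps(10e9) -> ~14.88 Mpps; line_rate_mpps(1e9) -> ~1.49 Mpps
```

At 1GbE the same math gives ~1.49 Mpps, which is why roughly a million packets per second per core was historically "good enough" for gigabit but falls an order of magnitude short at 10G.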


You don't need an interrupt per packet, at least not with sensible NICs and OSes. Something like 10k interrupts per second is good enough: pick up a bunch of packets on each interrupt. You do lose out slightly on latency, but gain a lot of throughput. Look up 'interrupt moderation'; it's not new, and most cards should support it.

Professionally, I ran dual Xeon 2690 v1 or v2 machines to 9Gbps for HTTPS download on FreeBSD; HTTP hit 10G (we only had one 10G link to the internet on those machines), but crypto took too much CPU. Dual Xeon 2690 v4 ran to 20Gbps, no problem (2x 14-core Broadwell, much better AES acceleration, faster RAM, more cores, etc., with dual 10G to the internet).

Personally, I've just set up 10G between my two home servers, and can only manage about 5-8Gbps with iperf3, but that's with a Pentium G2020 on one end (dual-core Ivy Bridge, 10 years old at this point), and the network cards are configured for bridging, which means no TCP offloading.

Edit: also, check out what Netflix has been doing with 800Gbps, although sendfile and TLS in the kernel cuts out a lot of userspace, kind of equal but opposite of cutting out kernelspace, http://nabstreamingsummit.com/wp-content/uploads/2022/05/202...


Interrupt moderation only gives a modest improvement, as can be seen from the benchmarking done by Intel.

Intel would also not have gone through the effort of developing DPDK if all you had to do to achieve line-rate performance was enable interrupt moderation.

Furthermore, quoting Gbps numbers is beside the point when the limiting factor is packets per second. It is trivial to improve Gbps numbers simply by using larger packets.


I'm quoting bulk transfer, with 1500 MTU. I could run jumbo packets for my internal network test and probably get better numbers, but jumbo packets are hard. When I was quoting HTTPS download on the public internet, that pretty much means MTU 1500 as well, and that was definitely the case.

If you're sending smaller packets, sure, that's harder. I guess that's a big deal if you're a DNS server or doing VoIP (audio only), but if you're doing any sort of bulk transfer, you're getting large enough packets.

> Intel would also not have gone through the effort to develop DPDK if all you had to do to achieve linerate performance would be to enable interrupt moderation.

DPDK has uses, sure. But you don't need it for 10G on decent hardware, which includes 7 year old server chips, if you're just doing bulk transfer.


Bulk transfers aren't that interesting from a networking perspective.

You're gonna have a bad time if you optimize only for the best-case scenario.

Even using IMIX is a low bar. The proper way to do things is line rate using small packets.


Most Linux network drivers have supported NAPI for a couple of decades. No panacea, of course, but still far from one interrupt per packet.


Would you mind sharing the Go and Python results from your machine too? It's an apples-to-oranges comparison otherwise.

EDIT: My results on a 5950X (undervolted):

  python3.8.exe test.py
    100,000 tasks    139,130 tasks per/s
    200,000 tasks    121,905 tasks per/s
    300,000 tasks    120,000 tasks per/s
    400,000 tasks    114,286 tasks per/s
    500,000 tasks    119,403 tasks per/s
    600,000 tasks    117,073 tasks per/s
    700,000 tasks    130,612 tasks per/s
    800,000 tasks    122,488 tasks per/s
    900,000 tasks    120,000 tasks per/s
    1,000,000 tasks  110,155 tasks per/s

  python3.11.exe .\test.py
    100,000 tasks    206,452 tasks per/s
    200,000 tasks    185,507 tasks per/s
    300,000 tasks    186,408 tasks per/s
    400,000 tasks    179,021 tasks per/s
    500,000 tasks    167,539 tasks per/s
    600,000 tasks    177,778 tasks per/s
    700,000 tasks    188,235 tasks per/s
    800,000 tasks    180,919 tasks per/s
    900,000 tasks    168,421 tasks per/s

  .\test.exe (Go 1.20 compiled)
    100000 tasks   2710563.336378 tasks per/s
    200000 tasks   3076885.207567 tasks per/s
    300000 tasks   3332292.917434 tasks per/s
    400000 tasks   3040479.422795 tasks per/s
    500000 tasks   2810232.844653 tasks per/s
    600000 tasks   3004138.200371 tasks per/s
    700000 tasks   2738877.029117 tasks per/s
    800000 tasks   2893730.985022 tasks per/s
    900000 tasks   3043877.494077 tasks per/s
    1000000 tasks  2857992.089078 tasks per/s


I started doing some C++ development again, and Google just gives me wrong/outdated solutions or unanswered SO questions, while ChatGPT is on point most of the time.

