
I posted Serving Netflix Video Traffic at 800Gb/s and Beyond [1] in 2022. For those unaware of the context, you may want to read the previous PDF and thread first. Now we have an update; quote:

> Important Performance Milestones:

> 2022: First 800Gb/s CDN server: 2x AMD 7713, NIC kTLS offload

> 2023: First 100Gb/s CDN server consuming only 100W of power: Nvidia Bluefield-3, NIC kTLS offload

My immediate question is whether the 2x AMD 7713 system actually consumes more than 800W of power, i.e. more Watts/Gbps. Even if it does, it is based on 7nm Zen 3 with DDR4 and came out in 2021. Would a Zen 5 system with DDR5 outperform Bluefield in Watts/Gbps?
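
To make the "NIC kTLS offload" part concrete: the serving path keeps TLS in the kernel, so sendfile() data never round-trips through userspace, and a capable NIC can then do the record crypto itself. Netflix does this on FreeBSD; as a rough sketch of the same idea, the equivalent kTLS setup on Linux looks like this (key material from the userspace handshake is elided, and enable_ktls_tx is just an illustrative name):

    #include <linux/tls.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <string.h>
    #include <sys/socket.h>

    #ifndef SOL_TLS
    #define SOL_TLS 282   /* kernel constant; missing from older libc headers */
    #endif
    #ifndef TCP_ULP
    #define TCP_ULP 31
    #endif

    /* sk is a connected TCP socket whose TLS handshake was completed in
     * userspace; key/iv/salt/seq are the TX secrets from that handshake. */
    int enable_ktls_tx(int sk, const unsigned char *key,
                       const unsigned char *iv, const unsigned char *salt,
                       const unsigned char *seq)
    {
        struct tls12_crypto_info_aes_gcm_128 ci;

        memset(&ci, 0, sizeof(ci));
        ci.info.version = TLS_1_2_VERSION;
        ci.info.cipher_type = TLS_CIPHER_AES_GCM_128;
        memcpy(ci.key, key, TLS_CIPHER_AES_GCM_128_KEY_SIZE);
        memcpy(ci.iv, iv, TLS_CIPHER_AES_GCM_128_IV_SIZE);
        memcpy(ci.salt, salt, TLS_CIPHER_AES_GCM_128_SALT_SIZE);
        memcpy(ci.rec_seq, seq, TLS_CIPHER_AES_GCM_128_REC_SEQ_SIZE);

        /* Attach the TLS ULP, then hand the kernel the TX keys. From here
         * on, plain write()/sendfile() output is encrypted in the kernel
         * (or on the NIC, if it supports TLS offload). */
        if (setsockopt(sk, IPPROTO_TCP, TCP_ULP, "tls", sizeof("tls")) < 0)
            return -1;
        return setsockopt(sk, SOL_TLS, TLS_TX, &ci, sizeof(ci));
    }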

[1] https://news.ycombinator.com/item?id=32519881




I haven't read the post yet, but I used to work in this space at a cable company. The nice bit with BlueField-3 is that you can run nginx on the on-board Arm cores, which have sufficient RAM for the live HLS use case. You can use a Liqid PCIe fabric and support 20 BlueField NICs off a single-CPU 1U box. You essentially turn the 1U box into a mid-tier cache for the NICs. Doing this I was able to generate 120Gb/s off each NIC off a 1U HP + the PCIe fabric/cards. I worked with Liqid and the HP lab here in Colorado prototyping it. Edit: I ran the CDN edge directly on the NIC using Yocto Linux.


Note that your power consumption is more than just the CPU (combined TDP of 2x225W [0]). You also have to consider the SSDs (16x20W when fully loaded [1]), the NICs (4x24W [2]), and the rest of the system (e.g. cooling, backplane).

[0] https://www.amd.com/en/products/processors/server/epyc/7003-...

[1] I couldn't find 14TB enterprise SSDs on Intel's website, so I'm using the numbers from 6.4TB drives: https://ark.intel.com/content/www/us/en/ark/products/202708/...

[2] I'm not sure offhand which model number to use, but both models that support 200GbE on pages 93-96 have this maximum wattage: https://docs.nvidia.com/nvidia-connectx-6-dx-ethernet-adapte...
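
Summing those cited maximums is straightforward; a back-of-the-envelope sketch (these are nameplate figures, not measured draw, and the 1.5x overhead multiplier is a guess):

    #include <stdio.h>

    int main(void)
    {
        /* nameplate maximums from [0], [1], [2] above, not measured draw */
        int cpu = 2 * 225;   /* 2x EPYC 7713 at 225W TDP each     */
        int ssd = 16 * 20;   /* 16 NVMe drives at ~20W under load */
        int nic = 4 * 24;    /* 4x 200GbE ConnectX-6 Dx at ~24W   */
        int sum = cpu + ssd + nic;

        printf("components:        %4d W\n", sum);          /* 866 W  */
        printf("x1.5 for overhead: %4d W\n", sum * 3 / 2);  /* 1299 W */
        return 0;
    }

866W of nameplate components already blows through an 800W budget before any cooling overhead, which squares with the PSU sizing below.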

Or, you can skip all the hand calculations and just fiddle with Dell's website to put together an order for a rack server while trying to mirror the specs as closely as possible (I only included two NICs, since it complained that the configuration didn't have enough low-profile PCIe slots for four):

https://www.dell.com/en-us/shop/dell-poweredge-servers/power...

In this case, I selected a 1100W power supply and it's still yelling at me that it's not enough; 1400W is enough to make that nag go away.


Well, I am assuming the memory and SSDs are the same. The only difference should be CPU + NIC, since Bluefield itself is the NIC. Maybe Drewg123 could expand on that (if he is allowed to).


That is a fair point, as the 2x CPU + 4x NIC are "only" about 550W put together. There's probably more overhead for cooling (as much as 40% of a datacenter's power; multiplying by 1.5x pushes you just over that 800W number).

That said, being able to do 800G in an 800W footprint doesn't automatically mean that you can drive 100G in a 100W footprint. Not every ISP needs that 800G footprint, so being able to deploy smaller nodes can be an advantage.

Also: I was assuming that 100W was the whole package (which is super impressive if so), since the Netflix serving model should have most of the SSDs in standby most of the time, and so you're allowed to cheat a little bit in terms of actual power draw vs max rating of the system.
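
Running the thread's original Watts/Gbps comparison with these estimates (the ~550W component figure above times the 1.5x overhead guess, and the 100W Bluefield number taken at face value):

    #include <stdio.h>

    int main(void)
    {
        /* assumed wall power: ~550W of CPU+NIC times a 1.5x overhead guess */
        double epyc_w = 550.0 * 1.5, epyc_gbps = 800.0;
        double bf3_w  = 100.0,       bf3_gbps  = 100.0;  /* at face value */

        printf("2x EPYC 7713: %.2f W/Gbps\n", epyc_w / epyc_gbps); /* ~1.03 */
        printf("BlueField-3:  %.2f W/Gbps\n", bf3_w / bf3_gbps);   /*  1.00 */
        return 0;
    }

At nameplate numbers the efficiency is close to a wash, roughly 1 W/Gbps either way; the Bluefield's real win is the smaller deployable unit, per the footprint point above.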


That’s not how TDP works.


You're not wrong, but it's still a nominally usable lower bound for the actual power draw of the chip, and a reasonable proxy for how much heat you need to dissipate via your cooling solution.


Another observation: using the host CPU to manage the NVMe storage for VOD content is also a bottleneck. You can use the Liqid 8x NVMe PCIe cards and address them directly from the BlueField-3 Arm processes using NVMe-oF. You are then just limited by PCIe 5.0 switch capacity among the 10-20 shared NVMe/BlueField-3 cards.
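
For the curious: an NVMe-oF connect from the BlueField's Arm-side Linux boils down to a write to the kernel's fabrics interface, which is what nvme-cli's "nvme connect" does under the hood. A minimal sketch, with a made-up target address and NQN:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* transport address and NQN below are made-up placeholders */
        const char *opts = "transport=rdma,traddr=10.0.0.2,trsvcid=4420,"
                           "nqn=nqn.2023-01.example:vod-store";
        int fd = open("/dev/nvme-fabrics", O_RDWR);

        if (fd < 0) { perror("open /dev/nvme-fabrics"); return 1; }
        if (write(fd, opts, strlen(opts)) < 0) {
            perror("nvme-of connect");
            close(fd);
            return 1;
        }
        close(fd);   /* on success, a new /dev/nvmeXnY namespace shows up */
        return 0;
    }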



