The overall system will be powered by 48V DC over a 6-pin Molex Mini-Fit Jr connector, then stepped down to 12V by a 48V-to-12V intermediate bus converter (IBC) I previously designed and characterized (https://serd.es/2024/10/15/Intermediate-bus-converter.html).
The 24-port line card has a 15W TDP overall: 6.75W datasheet worst-case power consumption for each of the two 12-port PHYs, plus 1.5W of headroom for power supply conversion losses and the like. With my current bench setup (11 of 12 links up on PHY 0 and 2 of 12 on PHY 1, all at gigabit, but with PHY 1 not yet talking to the FPGA), they're happy but warm: 63.8C and 49.5C die temperature respectively, pulling a combined 7.173W (not counting power conversion losses) according to their internal sensors. PCB temperature ranges from 37.2C to 49C across the various measurement points.
The entire setup, including the FPGA board, the IBC, one of the two planned line cards, and some glue components that won't be in the final switch, is pulling 18W and change right now, although the FPGA's power consumption will go up as I build out more logic. I don't currently have measurements of just the FPGA's draw (I could put a current clamp on the 12V cable from the IBC, I suppose), but the PDU board does have I2C sensors that can measure consumption of the logic board and each line card separately; I just haven't written the firmware to read them yet. I also don't have FPGA temperature readings in the current firmware, although the last time I checked via JTAG I had plenty of margin.
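Reading those sensors shouldn't be much work once I get to it. Here's a rough sketch of the idea, assuming an INA226-style shunt monitor; the actual sensor part on the PDU board, the bus number, the address, and the shunt value below are all placeholders, and the real firmware will talk to the MCU's own I2C peripheral rather than going through Linux i2c-dev as this example does:

```cpp
// Hypothetical sketch: read one rail's power monitor over I2C.
// Assumes an INA226-style shunt monitor; part, bus, address, and
// shunt value are placeholders, not the actual PDU board design.
#include <cstdint>
#include <cstdio>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

static constexpr uint8_t REG_BUS_VOLTAGE = 0x02;  // 1.25 mV per LSB
static constexpr uint8_t REG_POWER       = 0x03;  // 25 * current LSB per bit
static constexpr uint8_t REG_CALIBRATION = 0x05;

// Registers are 16 bits, big-endian on the wire
static uint16_t ReadReg16(int fd, uint8_t reg)
{
    write(fd, &reg, 1);
    uint8_t buf[2];
    read(fd, buf, 2);
    return (buf[0] << 8) | buf[1];
}

static void WriteReg16(int fd, uint8_t reg, uint16_t value)
{
    uint8_t buf[3] = { reg, uint8_t(value >> 8), uint8_t(value & 0xff) };
    write(fd, buf, 3);
}

int main()
{
    // Placeholder bus and address; real values depend on the board netlist
    int fd = open("/dev/i2c-1", O_RDWR);
    if (fd < 0)
        return 1;
    ioctl(fd, I2C_SLAVE, 0x40);

    // Calibrate: CAL = 0.00512 / (current_lsb * r_shunt).
    // With a (hypothetical) 10 mOhm shunt and 0.5 mA/LSB, CAL = 1024.
    const float current_lsb = 0.0005f;
    WriteReg16(fd, REG_CALIBRATION, 1024);

    float vbus  = ReadReg16(fd, REG_BUS_VOLTAGE) * 1.25e-3f;
    float power = ReadReg16(fd, REG_POWER) * 25 * current_lsb;
    printf("Bus: %.3f V, load: %.3f W\n", vbus, power);

    close(fd);
    return 0;
}
```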
As of right now everything is happy with low-profile heatsinks (Wakefield-Vette 960-27-12-D-AB-0 on both the PHYs and the FPGA), passively cooled, just sitting on my bench with no fans. I can go taller if increased power consumption or worse airflow dictates it in the future; they're nowhere near hitting the top of a 1U chassis.
My plan for the final system is air intakes somewhere on the sides and/or the front around the RJ45s, with one or more fans exhausting out the back; details TBD based on more design and testing once I have better projections of the overall thermal load.
Very roughly, my overall thermal/power budget is 15W per line card (30W combined), plus no more than 20W for the logic board (probably quite a bit less) with all ports lit up and passing packets, for a total of 50W plus whatever the losses in the IBC are (it's roughly 94% efficient at this load based on previous testing). This is comfortably below the IBC's 72W output limit, and should be very reasonable to reject from a 1U chassis with fairly basic air cooling, especially since it's not all coming from a single point load the way it would in a single-socket server.
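To put a number on those IBC losses (assuming the 94% figure still holds at this operating point): 50W out / 0.94 ≈ 53.2W drawn at the 48V input, i.e. roughly 3W dissipated in the converter itself on top of the 50W load.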