I did! I started by going to vast.ai, where I could look at the specs of the top-scoring machines. I started with the motherboard (I had to know it could support my 3090s, because some PCIe buses can't handle all that data), then copied everything else I could. I ended up using PCIe riser extenders and zip-tying the cards (plastic ties — I should really use metal ones, since plastic degrades with heat) to a rack I got from Lowes. I'm not too pleased with how it looks, but it works!
BTW, depending on where you're at in your ML journey, Jeremy Howard from fast.ai recommends sticking with hosted instances like Paperspace until you really need your own machine. Unless, of course, you enjoy Linux sysadmin tasks. :) It can get really annoying trying to match the right version of CUDA with the version of PyTorch you need for the newest model you're trying to run.