
I may be misunderstanding your q3 -- if so, apologies.

You shouldn't generally run your AI model directly on your web server, but instead run it on a dedicated server. Or just use an inference service like Together, Fireworks, Lepton, etc (or use OpenAI/Anthropic etc). Then use async on the web server to talk to it.

Thanks for pointing out the JS app walkthrough mention - I'll update that to remove conda; we don't have FastHTML up as a conda lib yet! I also updated it to clarify we're not actually recommending any particular package manager.




I've added fasthtml (and its dependencies) to conda-forge, so it's available in conda/mamba now.
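Assuming the package is published under the name `fasthtml` (check conda-forge for the exact name), installation would look something like:

```shell
# install from the conda-forge channel
conda install -c conda-forge fasthtml
# or, with mamba:
mamba install -c conda-forge fasthtml
```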





