
Not really. There is a very fast LCM model preset now, but it's still going to be painful.

SDXL in particular isn't one of those "compute-light, bandwidth-bound" models like llama (or Fooocus's own mini prompt-expansion LLM, which in fact runs on the CPU).

There is a repo focused on CPU-only SD 1.5.
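
For reference, a minimal sketch of the LCM-LoRA approach with Hugging Face diffusers on CPU (assuming the standard SD 1.5 and LCM-LoRA repos on the Hub; adjust model IDs to taste):

  import torch
  from diffusers import DiffusionPipeline, LCMScheduler

  # Load SD 1.5 on CPU; float32 is the safe dtype for CPU inference
  pipe = DiffusionPipeline.from_pretrained(
      "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float32
  )

  # Swap in the LCM scheduler and load the LCM-LoRA weights,
  # which cut inference from ~20-50 steps down to ~4
  pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
  pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

  image = pipe(
      "a photo of a cat",
      num_inference_steps=4,
      guidance_scale=1.0,  # LCM works best with little or no CFG
  ).images[0]
  image.save("cat.png")

Even at 4 steps, each UNet pass is still heavy compute, which is why CPU-only stays painful.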




Yeah, llama runs acceptably on my server, but buying a GPU and setting it all up seems really unfun. It's also much more expensive than my hobby budget allows.


You don't need a big one; even an old 4 GB GPU will massively accelerate prompt ingestion.
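
With llama-cpp-python, for instance, you can offload just a subset of layers to a small GPU and keep the rest on CPU (sketch only; the model path is hypothetical and the layer count should be tuned to what fits in VRAM):

  from llama_cpp import Llama

  # Partial offload: even a handful of layers on a 4 GB card
  # speeds up prompt processing noticeably
  llm = Llama(
      model_path="./models/llama-7b.Q4_K_M.gguf",  # hypothetical path
      n_gpu_layers=16,  # tune to available VRAM; 0 = pure CPU
      n_ctx=4096,
  )

  out = llm("Q: What is the capital of France? A:", max_tokens=32)
  print(out["choices"][0]["text"])

The equivalent with the llama.cpp CLI is the -ngl flag.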



