Hacker News

So in the old model you could: 1. pay for compute 2. charge the customers to pay for compute,

and now you can instead: 1. pay your customers to pay for compute 2. charge the customers to pay for the customers to pay for compute

Is there something I'm not understanding in the business logic of this?

Is it the fact that this would be running on computers that are essentially free, since it would just be like the desktop in someone's home office, so the infrastructure costs are already paid for (i.e., externalized)?

Or like would the value here be accessing the LLM service for 'free'? But isn't just paying for a service like OpenAI relatively inexpensive and already nicely set up?




> But isn't just paying for a service like OpenAI relatively inexpensive and already nicely set up?

Sure, but OpenAI is never going to offer you a raw product. Their offerings will always be the heavily restricted, corporatized product they offer now. That works for many, maybe most, people, but there's definitely a market for a "power to the users" LLM AI with no rules.


> Is there something I'm not understanding in the business logic of this?

That people would rather give away some of the GPU time they aren't using at the moment than pay a subscription. And presumably they also don't want to be beholden to whatever filters the "big AI cluster owner" puts in place.



