
This is not my experience, but it depends on what you consider "competitive" and "serious".

While it does appear that a large number of people are using these for RP and... um... similar stuff, I do find code generation to be fairly good, especially with some recent Qwens (from Alibaba). Full disclosure: I use this sparingly, either to generate boilerplate or to generate a complete function/module with clearly defined specs, and I sometimes have to refactor the output to fit my style and preferences.

I also use various general models (mostly Meta's) fairly regularly to come up with and discuss various business and other ideas, and get general understanding in the areas I want to get more knowledge of (both IT and non-IT), which helps when I need to start digging deeper into the details.

I usually run quantized versions (I have an older GPU with limited VRAM).
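For anyone curious what that looks like in practice: a common route is a quantized GGUF build run through llama.cpp, offloading only part of the model to the GPU so it fits in limited VRAM. A minimal sketch (the model filename and layer count here are assumptions; tune -ngl to your card):

```shell
# Assumes llama.cpp is built and a quantized Qwen GGUF has already been
# downloaded (Q4_K_M is a common 4-bit quantization).
# -ngl sets how many layers are offloaded to the GPU; lower it if you
# run out of VRAM on an older card.
./llama-cli \
  -m qwen2.5-coder-7b-instruct-q4_k_m.gguf \
  -ngl 20 \
  -c 4096 \
  -p "Write a Python function that parses an ISO 8601 date string."
```

The trade-off is simple: more offloaded layers means faster generation but more VRAM used, so on older cards you find the largest -ngl that doesn't OOM.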

Wordsmithing and generating illustrations for my blog articles (I prefer Plex; it's actually fairly good at adding captions to illustrations, whereas the built-in image generator in ChatGPT was still horrible at that when I tried it a few months ago).

Some resume tweaking as part of a manual workflow (multiple iterations, checking back and forth between what the LLM gave me and my version).

So it's mostly stuff for my own personal consumption that I don't necessarily trust the cloud with.

If you have a SaaS idea with an LLM or other generative AI at its core, processing the requests locally is probably not the best choice. Unless you're prototyping, in which case it can help.



Why would you waste time on good models when there are great models?


Good models are good enough for me, meta_x_ai. I gain experience by setting them up and following industry trends, and I don't trust OpenAI (or MSFT, or Google, or whoever) with my information. No, I don't do anything illegal or unethical, but that's not the point.


The good local model isn't building a profile of me, my preferences, my health issues, my political leanings, and other info, the way the "great" Google and OpenAI models most likely are based on the questions you ask them. Just imagine if one day there's a data breach and your profile ends up on the dark web for future employers to find.


I understand your concerns.

For me, though, this would be all upside, because I've largely explored technical topics with language models, which would only be impressive to an employer.

At this point, it's like asking what someone uses a computer for. The use cases are so varied.

I can see how it would be interesting to set up a local model just for the fun of setting it up. When it comes down to it for me, though, it's just so much easier to pay $20 a month for Sonnet that it isn't even close, or really a decision point at all.



