Tech enthusiasts will judge AI based on what it gets right; we're interested in what it "can" do. Everyone else will judge AI based on where it fails; they are interested in what "problems" it "does" solve.
> a fluent conversation with a super smart AI that can generate any image/video is mindblowing to me.
They see:
Software that is generally unreliable and unable to accomplish basic tasks
I'd dispute this: I count myself as a tech enthusiast, but I'm an enthusiast for tech that works well. I increasingly find myself having to put up with stuff that doesn't work well, and it's infuriating that all this AI investment is happening instead of fixing the things Windows routinely does to make my working day harder.
Also, in my experience, it's the non-tech-enthusiasts who are diving into LLMs, because they don't understand what is actually going on. It basically looks like a repeat of the whole ELIZA affair from a few decades ago, except this time it's vastly more expensive, has to run in a datacentre, and can write you an essay instead of just rephrasing your question.
> Software that is generally unreliable and unable to accomplish basic tasks
Yeah, specifically on your quote: it's very easy to create some images and video. It's very hard to create exactly what you need if you have specific requirements.
It's almost as if content creation is hard! Well, that's because it is. You need to know the client, understand their needs, keep the content in line with their existing visual language, etc.
What AI makes easier is making things look professional. But a real professional doesn't just make it look good; they also make it what you need.
Where AI comes in is as a helper, and in situations where "good enough" suffices. There are many of those situations, many of which would not have had the budget for a real pro anyway.
Obviously that's one of the many use cases an LLM really sucks at. So no, I don't want it there.
But the thing where someone dumps a long email thread on me and it summarises it? Yeah. Or having it do some basic web searches for me (these days it's a lot of work weeding through all the horrible clickbait).
But what we were talking about here was content creation. What I could imagine it helping content creators with is things like "remove the background from this photo": no more busywork like tracing photos.
And yes, I do think LLMs are overused and dumped into many scenarios where they add no value or even detract. But there are use cases where they can add value too. Just not as many as the hype suggests.
> Tech enthusiasts will judge AI based on what it gets right; we're interested in what it "can" do.
Selecting only the cases where something gets something right tells you nothing about what it can do. A random number generator can drive a car if you select only the cases where it does so correctly (and given infinite iterations, there will be such cases), but that doesn't mean it can drive a car in any real sense.
I assume "tech enthusiasts" here means "AI Kool-Aid drinkers".
Those "AI Kool-Aid drinkers" are a very vocal faction, and not only on HN. Too many software companies are cramming "AI" wherever they can, users be damned.
And what, my undersensationalized friend, do you understand by the word "chat"?