
Autocorrect is not generative AI in the sense anyone is using that term. Also, autocorrect doesn't even need to use any sort of ML model.
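For illustration: a bare-bones autocorrect is nothing more than edit distance against a word list. A toy Python sketch (the five-word dictionary is obviously made up for the example):

    # Toy autocorrect: pick the dictionary word with the smallest
    # Levenshtein (edit) distance to the typed word. No ML anywhere.
    def edit_distance(a: str, b: str) -> int:
        # Classic dynamic-programming Levenshtein distance.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                # deletion
                               cur[j - 1] + 1,             # insertion
                               prev[j - 1] + (ca != cb)))  # substitution
            prev = cur
        return prev[-1]

    DICTIONARY = ["hello", "help", "held", "world", "word"]  # stand-in word list

    def autocorrect(typo: str) -> str:
        return min(DICTIONARY, key=lambda w: edit_distance(typo, w))

    print(autocorrect("helo"))  # -> "hello" (ties resolve to list order)

Real keyboards layer frequency data and key-adjacency heuristics on top, but none of that requires a generative model.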

Ah yes, the duality of the anti-AI crowd on HN: “GenAI is just fancy autocorrect”, and “autocorrect isn’t actually GenAI”.

The thing is, if you’re talking about making laws (as GP is), your “surely people understand this difference” strategy counts for squat, and the impact will be larger than you think.


You don't seem to understand what people mean when they say "AI is just fancy autocorrect". They're talking about the little word suggestions above the keyboard, not about correcting spelling. And yes, of course those suggestions are provided by some sort of ML model, and yes, if you actually wrote a whole article using only them, it should be marked as AI-generated, but literally no one is doing that. Maybe because it's not fancy enough autocorrect. Either way, this is not the gotcha you think it is.
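To make the "fancy autocorrect" point concrete: the suggestion strip can be driven by something as simple as a bigram model. A toy Python sketch (the corpus is made up):

    from collections import Counter, defaultdict

    # Toy next-word suggester: count word bigrams in a corpus and
    # suggest the most frequent followers of the last typed word.
    corpus = "the cat sat on the mat the cat ran".split()  # stand-in corpus

    followers = defaultdict(Counter)
    for prev_word, next_word in zip(corpus, corpus[1:]):
        followers[prev_word][next_word] += 1

    def suggest(last_word: str, k: int = 3) -> list[str]:
        # Top-k most common words seen after last_word.
        return [w for w, _ in followers[last_word].most_common(k)]

    print(suggest("the"))  # -> ['cat', 'mat']

Production keyboards use beefier models than this, but the point stands: it's next-word prediction, not spell correction.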

But the original poster said:

>Even if AI contributed just a tiny bit.

Which would imply autocorrect should be reported as AI use.


A law like this would obviously need some sensible definition of what "AI" means in this context. Online translation tools also use ML models, as do the systems that unlock your device with your face, so classifying all of that as "AI contributions" would make the definition completely useless.

I assume the OP was talking about things like LLMs and diffusion models which one could definitely single out for regulatory purposes. At the end of the day I don't think it would ever be realistically possible to have a law like this anyway, at least not one that wouldn't come with a bunch of ambiguity that would need to be resolved in court.


OK, so define it for us, please. Because, once again, this thread is talking about introducing laws about "AI". OP was talking about LLMs, you say; so SLMs are fine then? If not, where is the boundary? If they are fine, then congratulations, you have created a new industry of people pushing the boundaries of what SLMs can do, as well as of how they are defined.

Laws are built on definitions, and this hand-wavy BS is how we got nonsense like the current version of the AI Act.


Why are you so mad at me? I'm not even the OP; you should be asking them these questions. I'm also not convinced we need regulation like this in the first place, so I can't tell you where the boundary should be, but a boundary could certainly be found, and it would be well beyond simple spellchecking autocorrect.

I also don't understand why you think this would be so impossible to define. There are regulations for all kinds of areas where specific things are targeted, like chemicals or drugs, and just because some of those rules have incentivized people to slightly change a regulated thing into an unregulated one does not mean we don't regulate those areas at all. So how are AI systems so different that you think an adequate definition would be impossible to find?


ollama can't connect to MCP servers; it can merely run models whose output instructs a connected system to connect to an MCP server (e.g. mcphost using ollama to run a prompt and then itself connecting to an MCP server if the response requires it).
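Roughly, the division of labor looks like this; a minimal sketch assuming Ollama's /api/chat tool-calling response format, with call_mcp_tool as a hypothetical stand-in for whatever MCP client the host program actually uses:

    import requests

    def call_mcp_tool(name: str, arguments: dict) -> str:
        # Hypothetical stand-in: a real host would forward this to an MCP
        # server via an MCP client. ollama itself never opens that connection.
        return f"(pretend MCP result for {name} with {arguments})"

    resp = requests.post(
        "http://localhost:11434/api/chat",  # default local ollama endpoint
        json={
            "model": "llama3.1",  # any tool-capable model works here
            "messages": [{"role": "user", "content": "What's the weather in Berlin?"}],
            "tools": [{  # tool schema the model may decide to "call"
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "description": "Get the weather for a city",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }],
            "stream": False,
        },
    ).json()

    # ollama only *emits* tool-call instructions; executing them (e.g. against
    # an MCP server) is entirely the host's job, and the result would then be
    # sent back to the model as a follow-up "tool" message.
    for call in resp["message"].get("tool_calls", []):
        fn = call["function"]
        print(call_mcp_tool(fn["name"], fn["arguments"]))

That's all mcphost (or any similar tool) is doing around ollama.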


I mean, both of these things are actually happening (drone deliveries and people spending a lot of time in VR), just at a much, much smaller scale than they were hyped up to be.


Drones and VR require significant upfront hardware investment, which curbs adoption. On the other hand, adopting LLM-as-a-service has none of these costs, so no wonder so many companies are getting involved with it so quickly.


Right, but abstract costs are still costs to someone, so how far does that go before mass adoption turns into a mass liability for whoever is ultimately on the hook? It seems like there is this extremely risky wager that everyone is making: that LLMs will find their "killer app" before the real costs of maintaining them become too much to bear. I don't think these kinds of bets often pay off. The opposite, actually: I think every truly revolutionary technological advance in recent memory has arisen out of its very obvious killer app(s); they were in a sense inevitable. Speculative tech (the blockchain being one of the more salient and frequently tapped examples) tends to work in pretty clear bubbles, in my estimation. I've not yet been convinced this one is any different, aside from the absurd scale at which it has been cynically sold as the biggest thing since Gutenberg, and while that makes it somewhat distinct, it's still a rather poor argument against it being a bubble.


A parallel outcome for LLMs sounds realistic to me.


If it’s not happening at the scale it was pitched, then it’s not happening.


Considering what we've been seeing in the Russia-Ukraine and Iran-Israel wars, drones are definitely happening at scale. For better or for worse, I expect worldwide production of drones to greatly expand over the coming years.


This makes no sense; just because something didn't become as big as the hypemen said it would doesn't make the inventions, or the users of those inventions, disappear.


For something to be considered “happening”, you can’t just have a handful of localized examples. It has to be happening at a large, noticeable scale, such that even people unfamiliar with the tech notice it. Then you can say it’s “happening”. Otherwise, it’s just smaller groups of people doing stuff.


It does though? If you click on the (i) button, there's a "Libraries" section.


There's also the built-in "Speak Selection" feature you can enable in the accessibility settings.


The article has links to all the mentioned videos.


Back in high school a teacher told us this "fact" as well, and I remember being very surprised because it did not match my experience at all. I have tested this theory many times since, e.g. while waiting at a pedestrian traffic light: I look off to the side so that the light is at the very edge of my peripheral vision and check whether I can perceive it turning green, and I always can. Of course this is not proof that there are no people who can't do this, but I definitely know that I can see color at the edge of my peripheral vision, and I've come to assume that this simply varies from person to person.


I haven't heard anything about him in a while; I assume he's still living in exile in Russia? In which case, yes, he might have evaded the US, but he still gave up a lot of his freedom in return.


Then you clearly don't work in healthcare or law.


Don't forget to take both of those stats with a grain of salt, though. The US has a lot of gig workers, who are not always counted correctly or consistently, and Germany has a large low-wage sector, where people are employed but earn less per month than they would get in unemployment benefits, so the state pays the difference.

