Um, I've now been accused on HN three times of being AI for comments I wrote by hand.
I got so annoyed the second time that I even created a post about it. I guess I just get really annoyed when someone accuses me, someone who writes things by hand, of AI slop, because it makes me feel like, at this point, why not just write it with AI? But I guess I just love to type.
I have unironically suggested in one of my HN comments that I should start making the grammatical mistakes I used to make when I had just started using HN like , this mistake that you see here. But I remember people flipping out in the comments over that grammatical mistake so much that it got fixed.
I am this close to intentionally writing sloppily to prove my comments aren't AI slop, but at the same time, I don't want to do it because I really don't want to change how I write just because of what other people say imo.
Don't kid you'reself, people LOVE grammatical and spelling errors. It's low entry, and by far the easiest way to get someone to interact with what you have written.
AI deprives them of this.
Why even read something with no mistakes? Just scan on to the next comment, you might get a juicy "your/you're" to point out if you don't waste time reading.
but yeah I guess sometimes I wonder: suppose a bot was accused of being AI. With the right prompt and training, it could also learn to flip out, and then we would genuinely be unable to trust anything.
I guess that can be wild stuff, but currently I just really flip out while staying just below the swear level to maintain decency (also, personally I don't like to swear ig), only to find that, okay, I am a human after all.
But I guess I am gonna start pasting this YouTube video when somebody accuses me of being AI.
It would be super funny and better than flipping out haha xD
"Got no way to prove it so maybe I am lying, but I am only human after all, don't put the blame on me, don't put the blame on me" with some :fire: emoji or something, or not, lmaoo. It would be dope. I am now waiting (anticipating, out of fun) for the next time I comment something written by me (literally a human lmaoo) and someone calls me AI.
The song is a banger too btw, so it's definitely worth a listen as well haha
The writing style just has several AI-isms; at this point, I don't want to point them out because people are trying to conceal their usage. It's maybe not as blatant as some examples, but it's off-putting by the first couple of paragraphs. These days, I lose all interest in reading when I notice it.
I would much, much, much rather read an article with imperfect English and mistakes than an LLM-edited article. At least I can get an idea of your thinking style and true meaning. Just as an example - if you were to use a false friend [1], an LLM may not deal with this well and conceal it, whereas if I notice the mistake, I can follow the thought process back to look up what was originally intended.
> Using them isn't an advantage, but not using them is a disadvantage. They handle the production part so we can focus on the part that actually matters: acquiring the novel input that makes content worth creating.
I would argue that using AI for copywriting is a disadvantage at this point. AI writing is so recognisable that it makes me less inclined to believe that the content would have any novel input or ideas behind it at all, since the same style of writing is most often being used to dress up complete garbage.
Foreign-sounding English is not off-putting, at least to me. It even adds a little intrigue compared to bland corporatese.
It did not feel off at all. I read every single word and that is all that counts.
I think what you are getting wrong is thinking that the reader cares about your effort. The reader doesn't care about your effort. It doesn't matter if it took you 12 seconds or 5 days to write a piece of content.
The key thing is people reading the entirety of it. If it is AI slop, I just automatically skim to the end and nothing registers in my head. The combination of em dashes and the sentence structure just makes my mind tune it out.
So, your thesis is correct. If you put in the custom visualization and put in the effort, folks will read it. But not because they think you put in the effort; they don't care. It's because right now AI produces generic fluff that's overly polished and perfectly correct. That's why I skip most LinkedIn posts as well. I personally don't care if it's AI or not, but mentally I just automatically discount and skip it. So your effort basically interrupts that automatic pattern recognition.
I get using a spell checker. I can see the utility in running a quick grammar check. Showing it to a friend and asking for feedback is usually a good idea.
But why would you trust a hallucinogenic plagiarism machine to "clean" your ideas?
I am almost sure that every single person who plotted the 1953 coup is dead. Maybe one of them survives somewhere aged 103 and no longer knowing their name.
Should Macron be judged by what Napoleon III (or, for that matter, Napoleon I) did? Surely there is some kind of continuity between those French heads of state; they even fly the same colors and sit in the same palace.
Because of the sheer incompetence and cruelty of the Islamic regime, I wonder if the Mossad even needs to do anything at this point. The regime is doing their work for them, upsetting the population and destabilizing the country.
Did you think that running a dictatorship is a stable endeavor? No foreign intervention is even needed when you build your house on sand.
It matters less than before. The US is no longer the dominant force it used to be in the 1950s, and the UK (which was part of the anti-Mossadegh plot) is completely gone from the world stage.
The world of 2026 cannot be reduced to a CIA/Mossad theatre where everyone else is an NPC who must suffer whatever they cook up there. Other people have agency and pursue their own aims. The EU, India, China, Iran, Russia, Qatar: all influential players.
When it comes to value for money/size, Qatar alone has a lot more influence than the US. Recently it forced the EU to relax its ESG standards in exchange for gas imports.
Sure, some people love to live in the past, but it is not the past anymore.
Trump chickening out of every world confrontation is a nice example of the diminishing capability of the US to bend the rest of the world to its will. The US can probably keep its influence in Latin America, but in the Old World, the balance of power has shifted.
Is Trump de facto more powerful than Mohammad bin Salman? IDK.
I never understood why some people get so fixated on one event in 1953, as if nothing else mattered after that.
Sure, it had a nontrivial effect. But it also happened at a time when Stalin and Churchill were still alive, there were 6 billion fewer people on the planet, and the first antibiotics and transistors had barely entered production. Korea was poorer than Ghana, etc.
It is 2026, three generations have passed, and not everything can be explained and excused by a 1953 event forever. But it is convenient for autocracy advocates in general.
It reminds me of the worship of the Great Patriotic War in Russia. Again, as if nothing that happened later matters.
That is not really rare among engineers. Being able to write code does not require much political literacy, and I have met more than a few political illiterates who were decent coders. In person, not bots.
The current Ayatollah bullshit cannot be explained without that coup d'état. People flocked to the religious zealots because the alternative was a Western satrap.
It is a bit like explaining the Communist coup in Czechoslovakia (1948) by the Western betrayal at Munich in 1938. It was a factor. But not The Factor. Just one of many.
In the case of Iran, there were other factors at play, too. The Shah's general drive to be an Atatürk-like Iranian modernizer, which clashed with the conservative rural population. The abilities of Khomeini, who pursued his goal of overthrowing the monarchy with absolute zeal. (Would Turkey be a modern state today if Atatürk himself had faced a similar opponent?) The willingness of France to shelter Khomeini and of some Western intellectuals to fawn over him. The naivete of the Iranian Left, which joined Khomeini's movement hoping to come out on top, only to eventually be slaughtered as "enemies of God".
Etc., etc. It is somewhat intellectually lazy to just drag out Mossadegh and leave the conversation, like GP did. It also masks other unpleasant facts.
For example, in my opinion, the Western intellectual class of the 1970s made a serious mistake by supporting Khomeini and cannot even bring itself to acknowledge it. I think this was at least as consequential to the eventual birth of the Islamic Republic as the Mossadegh coup. But the more people talk about the latter, the more they tend to forget about the former.
Nobody is denying that this is AI-enabled but that's entirely different from "AI can grow corn".
Also, Seth, a non-farmer, was already capable of using Google, online forums, and Sci-Hub/Libgen to access farming-related literature before LLMs came on the scene. In this case the LLM is just acting as a super-charged search engine. A great and useful technology, sure. But we're not utilizing any entirely novel capabilities here.
And tbh, until we take a good crack at World Models, I doubt we can.
I think that a lot of professional work is not about entirely novel capabilities either; most professionals get the major share of their revenue from bread-and-butter cases that apply already-known solutions to custom problems. For instance, a surgeon taking out an appendix is not devising a novel approach to the problem every time.
> In this case the LLM is just acting as a super-charged search engine.
It isn't, because that would imply getting everything necessary in a single action, as if there were high-quality webpages that give a good answer to each prompt. There aren't. At the very least, Claude must be searching, evaluating the results, and collating the data it finds from multiple results into a single cohesive response. There may also be agentic actions that trigger further searches if it judges the data insufficient for a high-quality response.
"It's just a super-charged search engine" ignores a lot of nuance about the difference between LLMs and search engines.
I think we are pretty much past the "LLMs are useless" phase, right? But I think "super-charged search engine" is a reasonably well-fitting description. Like a search engine, it provides its user with information. Yes, it is (in a crude, simplified description) better at that, both in terms of completeness (you get a more "thoughtful" follow-up) and in finding what you are looking for when you do not yet speak the language.
But that's not what OP was contesting. The statement "$LLM is _doing_ $STUFF in the real world" is far less correct than the characterisation as "super-charged search engine", because, at least as far as I'm aware, every real-world interaction has required consent from humans. This story included.
1) You are right, and it's impressive if he can use AI to bootstrap becoming a farmer.
2) Regardless, I think it proves a vastly understated feature of AI: it makes people confident.
The AI may be truly informative, or it may hallucinate, or it may simply give mundane, basic advice. Probably all three at times. But the fact that it's there, ready to assert things without hesitation, gives people so much more confidence to act.
You even see it with basic emails, myself included. I'm just writing a simple email at work, but I can feed it into AI and make some minor edits to make it feel like my own words, and I can just dispense with worries like "am I giving too much info, not enough, using the right tone, being unnecessarily short or overly effusive, etc." And it's not that the LLMs are necessarily an authority on these factors; it simply bypasses the process (writing) that triggers these thoughts.
More confidence isn't always better. In particular, confidence pairs well with the ability to follow through and be correct. LLMs are famous for confidently stating falsehoods.
Of course. It must be used judiciously. But it completely circumvents some thought patterns that lead to slow decision making.
Perhaps I need to say it again: that doesn't mean blindly following it is good. But perhaps using Claude Code instead of Googling will lead to 80% of the conclusions Seth would have reached otherwise, with 5% of the effort.
I started to write a logical rebuttal, but forget it. This is just so dumb. A guy is paying farmers to farm for him, and using a chatbot to Google everything he doesn't know about farming along the way. You're all brainwashed.
What specifically are you disagreeing with? I don't think it's trivial for someone with no farming experience to successfully farm something within a year.
>A guy is paying farmers to farm for him
Read up on farming. The labor is not the complicated part. Managing resources, including telling the labor what to do, when, and how is the complicated part. There is a lot of decision making to manage uncertainty which will make or break you.
We should probably differentiate between trying to run a profitable farm, and producing any amount of yield. They're not really the same thing at all.
I would submit that pretty much any joe blow is capable of growing some amount of crops, given enough money. Running a profitable farm is quite difficult, though. There's an entire ecosystem connecting prospective farmers who have money but limited skills or interest with people who have the skills to properly operate a farm, either independently (tenant farmers) or as farm managers so the hobby owner can participate. Institutional investors prefer the former, and Jeremy Clarkson's farm show is a good example of the latter.
When I say successful I mean more like profitable. Just yielding anything isn't successful by any stretch of the imagination.
>I would submit that pretty much any joe blow is capable of growing some amount of crops, given enough money
Yeah, in theory. In practice they won't: too much time and energy. This is where the confidence boost from LLMs comes in. You just do it and see what happens. You don't need to care if it doesn't quite work out, because it's so fast and cheap. Maybe you get anywhere from 50-150% of the result of your manual research for 5% of the effort.
My family raises hundreds of thousands of chickens a year. They feed, water, and manage the healthcare and building maintenance for the birds. That is it. Baby birds show up in boxes at the start of a season, and trucks show up and take the grown birds once they reach weight.
There is a large faceless company that sends out contracts for a particular value and farmers can decide to take or leave it. There is zero need for human contact on the management side of the process.
At the end of the day there is little difference between a company assigning the work and having a bank account versus an AI following all the correct steps.
All farms need farmhands. On some farms the farmer may play double duty, or hire custom farmhands operating under another business, but they are all farmhands just the same.
That's not the point of the original commenter. The point is that he expects Claude can inform him well enough to be a farm manager, and it's not impressive since Seth is the primary agent.
I think it is impressive if it works. Like I mentioned in a sibling comment, I think it already definitely proves something LLMs have accomplished, though, and that is giving people tremendous confidence to try things.
It only works if you tell Claude, "grow me some fucking corn profitably and have it ready in 9 months," and it does it.
If it's being used as a manager to simply flesh out the daily commands that someone is giving it, well, then that isn't "working"; that's just a new level of what we already have with APIs and crap.
So if I grow biomass for fuel or feedstock for plastics that's not farming? I'm sure there are a number of people that would argue with you on that.
I'm from a part of the country where there are large chunks of land dedicated to experimental grain growing, which is research, and other than the labels at the end of crop rows you'd have a difficult time telling it apart from any other farm.
It isn't an issue with Linux, it's an issue with the companies that make proprietary software and devices with only windows support. A better world is possible, but you need to accept the fight isn't easy. Switch today.
I'd ask you the inverse question: If Linux never got any better than it is currently, what would it take to push you away from Windows? I don't mean this as a challenge, I'm genuinely curious.
Not OP but I have a couple of red lines that if crossed, I would move to Linux: things stop “just working”, and ads/nags/notifications/behaviors that I don’t want cannot be disabled.
Things are very occasionally annoying right now when a new update enables some new idiotic thing but 99.9% of the time things just work.
Please look at the post. This is about a GPT designed to give you health advice, with all the hallucinations, miscommunication, bad training data, and lack of critical thinking (or any thinking, obviously).
I see many comments like this in here. Where is this so common? I'm not from the US, but I had the impression that American health care, while expensive, is good. If I assume most comments come from the US, then it is just expensive.
I cannot imagine a doctor evaluating just one possibility.
The big difference is accountability. An LLM has no mortality; it has no use for fear, no embodied concept of reputation, no persistent values. Everything is ephemera. But they are useful! More useful than humans in some scenarios! So there's that. But when I consider the purpose of conversation, utility is only one consideration among many.