jdthedisciple's comments | Hacker News

Ironically this reads like AI slop.

No, it reads like a LinkedIn post. That said, do we now have to check whether the text we wrote doesn't look like something AI generated?

You're absolutely right.

If it's a problem for you, then yeah. If you never get accused of using AI, then no.

Um, I've been accused on HN three times now of being AI, for comments I wrote by hand.

I got so annoyed the second time that I even created a post about it. It really gets to me when someone accuses me, who writes things by hand, of posting AI slop, because it makes me feel like, at this point, why not just write it with AI? But I guess I just love to type.

I have unironically suggested in one of my HN comments that I should start making the grammatical mistakes I used to make when I had just started using HN like , this mistake that you see here. But I remember people flipping out in the comments over that grammatical mistake so much that it got fixed.

I am this close to intentionally writing sloppily to prove my comments aren't AI slop, but at the same time I don't want to, because I really don't want to change how I write just because of what other people say, imo.


Don't kid you'reself, people LOVE grammatical and spelling errors. It's a low bar to clear, and by far the easiest way to get someone to interact with what you have written.

AI deprives them of this.

Why even read something with no mistakes? Just scan on to the next comment, you might get a juicy "your/you're" to point out if you don't waste time reading.


That's EXACTLY what an AI member of this community would say!

I know you're secretly a bot, because you used punctuation. Only AI uses punctuation!

/s


xD

that /s is carrying the whole message haha

but yeah, sometimes I wonder what would happen if a bot were the one accused of being AI. I mean, trained with the right prompt and everything, it could also learn to flip out, and then we genuinely wouldn't be able to trust anything.

I guess it could get wild, but currently I just flip out while staying just below the swearing level to maintain decency (also, personally I don't like to swear ig), only to conclude that okay, I am a human after all.

But I guess I am gonna start pasting this YouTube video whenever somebody accuses me of being AI.

I am only human after all: https://www.youtube.com/watch?v=L3wKzyIN1yk

It would be super funny and better than flipping out haha xD

"Got no way of prove it so maybe I am lying but I am only human after all, don't put the blame on me, Don't put the blame one me" with some :fire: emoji or something or not lmaoo. It would be dope, I am now waiting (anticipating out of fun) for the next time when I comment something written by me (literally human lmaoo) and someone calls me AI.

The song is a banger too btw so definitely worth a listen as well haha


Genuinely curious, what felt off? The ideas are mine; AI just helped clean up the English (I added a disclaimer).

The writing style just has several AI-isms; at this point I don't want to point them out, because people are trying to conceal their usage. It's maybe not as blatant as some examples, but it's off-putting within the first couple of paragraphs. These days I lose all interest in reading when I notice it.

I would much, much, much rather read an article with imperfect English and mistakes than an LLM-edited article. At least I can get an idea of your thinking style and true meaning. Just as an example - if you were to use a false friend [1], an LLM may not deal with this well and conceal it, whereas if I notice the mistake, I can follow the thought process back to look up what was originally intended.

[1] https://en.wikipedia.org/wiki/False_friend


For me it's a general feel of the style, but something about this stands out:

>We're not against AI tools. We use them constantly. What we're against is the idea that using them well is a strategy. It's a baseline.

The short, staccato sentences seem to be overused by AI. Real people tend to ramble a bit more often.


It reads like an Apple product page.

The fact that most of the subheadings start with "The" or "What Actually" is a bit of a giveaway in my view.

Not exclusive to AI, but I'd be willing to bet any money that the subheadings were generated.


> Using them isn't an advantage, but not using them is a disadvantage. They handle the production part so we can focus on the part that actually matters: acquiring the novel input that makes content worth creating.

I would argue that using AI for copywriting is a disadvantage at this point. AI writing is so recognisable that it makes me less inclined to believe that the content would have any novel input or ideas behind it at all, since the same style of writing is most often being used to dress up complete garbage.

Foreign-sounding English is not off-putting, at least to me. It even adds a little intrigue compared to bland corporatese.


It did not feel off at all. I read every single word and that is all that counts.

I think what you are getting wrong is thinking that the reader cares about your effort. The reader doesn't care about your effort. It doesn't matter if it took you 12 seconds or 5 days to write a piece of content.

The key thing is people reading the entirety of it. If it is AI slop, I just automatically skim to the end and nothing registers in my head. The combination of em dashes and the sentence structure just makes my mind tune it out.

So, your thesis is correct. If you put in the custom visualization and put in the effort, folks will read it. But not because they think you put in the effort; they don't care. It's because right now AI produces generic fluff that's too perfectly polished. That's why I skip most LinkedIn posts as well. Like, I personally don't care if it's AI or not. But mentally, I just automatically discount and skip it. So your effort basically interrupts that automatic pattern recognition.


You admit it yourself here:

> I run a marketing agency. We use Claude, ChatGPT, Ahrefs, Semrush. Same tools as everyone else. Same access to the same APIs.

Since you use it for your job, of course you use it for this blog, and that will make people look harder for AI signs.


> AI just helped clean up the English

Why?

I get using a spell checker. I can see the utility in running a quick grammar check. Showing it to a friend and asking for feedback is usually a good idea.

But why would you trust a hallucinogenic plagiarism machine to "clean" your ideas?


Ironically, everything smells like AI now, even when it's human.

Sometimes it feels like slop. Slop shouldn't get a pass just because a human wrote it.

How much of that feeling is false-positive pattern-matching?


Fun fact: the clergy was a crucial part of the coup, backed by the CIA. The same people are in power now, btw.

Fun fact: the same people who preach democracy to you all day plotted by night to oust one of his country's most democratically legitimate leaders.

Let that sink in for a moment.


I am almost sure that every single person who plotted the 1953 coup is dead. Maybe one of them survives somewhere aged 103 and no longer knowing their name.

Should Macron be judged by what Napoleon III (or, for that matter, Napoleon I) did? Surely there is some kind of continuity between those French heads of state; they even fly the same colors and sit in the same palace.


What makes you think the CIA/Mossad fundamentally operate differently today?

Oh btw, since we're on the topic of false flags:

https://en.wikipedia.org/wiki/Lavon_Affair


Because of the sheer incompetence and cruelty of the Islamic regime, I wonder if the Mossad even needs to do anything at this point. The regime is doing their work for them, upsetting the population and destabilizing the country.

Did you think that running a dictatorship is a stable endeavor? No foreign intervention is even needed when you build your house on sand.


IIRC Iran suffered from the worst brain drain in the world. That alone would doom the ayatollahs in the long term.

It matters less than before. The US is no longer the dominant force it used to be in the 1950s, and the UK (which was part of the anti-Mossadegh plot) is completely gone from the world stage.

The world of 2026 cannot be reduced to a CIA/Mossad theatre where everyone else is an NPC and must suffer whatever they cook up there. Other people have agency and do their own things. The EU, India, China, Iran, Russia, Qatar: all influential players.


Well, whatever you'd like to believe, of course.

When it comes to value for money/size, Qatar alone has a lot more influence than the US. Recently it forced the EU to relax its ESG standards in exchange for gas imports.

Sure some people love to live in the past, but it is not the past anymore, of course.

Trump chickening out of every world confrontation is a nice example of the diminishing capability of the US to bend the rest of the world to its will. The US can probably keep its influence in Latin America, but in the Old World the balance of power has shifted.

Is Trump de facto more powerful than Mohammad bin Salman? IDK.


I was under the impression that this was a well-known fact. Let what sink in? What are you trying to say?

Just busy being edgy I guess. There's nothing fun about the fact either.

Whenever I see mentions of the protesters asking for the Shah to come back, I can't help but worry for the future of Iran even if the protests succeed.

I never understood why some people get so fixated on one event in 1953, as if nothing else mattered after that.

Sure, it had a nontrivial effect. But it also happened in a time when Stalin and Churchill were still alive, there were 6 billion people fewer on the planet and the first antibiotics and transistors barely entered production. Korea was poorer than Ghana etc.

It is 2026, three generations have passed, and not everything can be explained and excused by a 1953 event forever. But it is convenient for autocracy advocates in general.

It reminds me of the worship of the Great Patriotic War in Russia. Again, as if nothing that happened later matters.


The question is, how can you be sure anything you see in the (controlled) news is not another instance of covert plots, false flags, and psyops [0]?

How, precisely how?

[0] https://en.wikipedia.org/wiki/Psychological_operations_(Unit...


How can I be sure that you aren't a bot or vice versa?

Don't worry, you're not the only person who can't answer this question.

Nobody can, that I know of.


I don't worry much here, given that HN isn't a very lucrative space to infest with bots. We will hold out for a few years here.

I am no longer on Facebook or Twitter/X, where that question is very relevant.


HN is loaded with bots, and this thread in particular is full of comments that somehow show less political literacy than the typical American 8th grader.

That is not really rare among engineers. Being able to write code does not require much political literacy, and I have met more than a few political illiterates who were decent coders. In person, no bots.

The current Ayatollah bullshit cannot be explained without that coup d'état. People flocked to the religious zealots because the alternative was a Western satrap.

Sorta-kinda.

It is a bit like explaining the Communist coup in Czechoslovakia (1948) by the Western betrayal at Munich in 1938. It was a factor. But not The Factor. Just one of many.

In the case of Iran, there, too, were other factors at play. The general drive of the Shah to be the Iranian Atatürk-like modernizer, which clashed with the conservative rural population. The abilities of Khomeini, who pursued his goal of overthrowing the monarchy with absolute zeal. (Would Turkey be a modern state today if Atatürk himself had faced a similar opponent?) The willingness of France to shelter Khomeini and the willingness of some Western intellectuals to fawn over him. The naivete of the Iranian Left, which joined Khomeini's movement and hoped to come out on top, only to eventually get slaughtered for being "enemies of God".

Etc., etc. It is somewhat intellectually lazy to just drag out Mossadegh and leave the conversation, like GP did. It also masks other unpleasant facts.

For example, in my opinion, the Western intellectual class of the 1970s made a serious mistake by supporting Khomeini and cannot even bring itself to acknowledge it. I think this was at least as consequential to the eventual birth of the Islamic Republic as the Mossadegh coup. But the more people talk about the latter, the more they tend to forget about the former.


It's the nature of fascist countries to be fixated on the past.

Timothy Snyder describes it as the "politics of eternity".


People in general tend to be nostalgic, but yeah, a specific sort of politician will use it for their own purpose.

So Seth, presumably a non-farmer, is doing a professional farmer's work all on his own without prior experience? Is that what you're saying?

Nobody is denying that this is AI-enabled but that's entirely different from "AI can grow corn".

Also, Seth, a non-farmer, was already capable of using Google, online forums, and Sci-Hub/Libgen to access farming-related literature before LLMs came on the scene. In this case the LLM is just acting as a super-charged search engine. A great and useful technology, sure. But we're not utilizing any entirely novel capabilities here.

And tbh, until we take a good crack at World Models, I doubt we can.


I think that a lot of professional work is not about entirely novel capabilities either; most professionals get the bulk of their revenue from bread-and-butter cases that apply already-known solutions to custom problems. For instance, a surgeon taking out an appendix is not taking a novel approach to the problem every time.

> In this case the LLM is just acting as a super-charged search engine.

It isn't, because that implies getting everything necessary in a single action, as if there are high-quality webpages that give a good answer to each prompt. There aren't. At the very least, Claude must be searching, evaluating the results, and collating the data it finds from multiple results into a single cohesive response. There could also be agentic actions that cause it to perform further searches if it judges the data insufficient for a high-quality response.

"It's just a super-charged search engine" ignores a lot of nuance about the difference between LLMs and search engines.


I think we are pretty much past the "LLMs are useless" phase, right? But I think "super-charged search engine" is a reasonably well-fitting description. Like a search engine, it provides its user with information. Yes, it is (to put it crudely) better at that, both in terms of completeness (you get a more "thoughtful" follow-up) and in finding what you are looking for when you are not yet speaking the language.

But that's not what OP was contesting. The statement "$LLM is _doing_ $STUFF in the real world" is far less correct than the characterisation as "super-charged search engine", because, at least as far as I'm aware, every real-world interaction has required consent from humans, this story included.


1) You are right, and it's impressive if he can use AI to bootstrap becoming a farmer.

2) Regardless, I think it proves a vastly understated feature of AI: It makes people confident.

The AI may be truly informative, or it may hallucinate, or it may simply give mundane, basic advice. Probably all 3 at times. But the fact that it's there ready to assert things without hesitation gives people so much more confidence to act.

You even see it with basic emails, myself included. I'm just writing a simple email at work, but I can feed it into AI and make some minor edits so it feels like my own words, and I can just dispense with worries like "am I giving too much info, or not enough, using the right tone, being unnecessarily short or going overboard on pleasantries, etc." And it's not that the LLMs are necessarily an authority on these factors; it simply bypasses the process (writing) that triggers these thoughts.


More confidence isn't always better. In particular, confidence pairs well with the ability to follow through and be correct. LLMs are famous for confidently stating falsehoods.

Of course. It must be used judiciously. But it completely circumvents some thought patterns that lead to slow decision making.

Perhaps I need to say it again: that doesn't mean blindly following it is good. But perhaps using Claude Code instead of Googling will lead to 80% of the conclusions Seth would have reached otherwise, with 5% of the effort.


> "...a vastly understated feature of AI: It makes people confident."

Good point. AI is already making regular Joes into software engineers. Management is so confident in this, they are axing developers/not hiring new ones.

I started to write a logical rebuttal, but forget it. This is just so dumb. A guy is paying farmers to farm for him, and using a chatbot to Google everything he doesn't know about farming along the way. You're all brainwashed.

What specifically are you disagreeing with? I don't think it's trivial for someone with no farming experience to successfully farm something within a year.

>A guy is paying farmers to farm for him

Read up on farming. The labor is not the complicated part. Managing resources, including telling the labor what to do, when, and how, is the complicated part. There is a lot of decision-making to manage uncertainty, and that will make or break you.


We should probably differentiate between trying to run a profitable farm, and producing any amount of yield. They're not really the same thing at all.

I would submit that pretty much any joe blow is capable of growing some amount of crops, given enough money. Running a profitable farm is quite difficult though. There's an entire ecosystem connecting prospective farmers with money and limited skills/interest to people with the skills to properly operate it, either independently (tenant farmers) or as farm managers so the hobby owner can participate. Institutional investors prefer the former, and Jeremy Clarkson's farm show is a good example of the latter.


When I say successful I mean more like profitable. Just yielding anything isn't successful by any stretch of the imagination.

>I would submit that pretty much any joe blow is capable of growing some amount of crops, given enough money

Yeah, in theory. In practice they won't; too much time and energy. This is where the confidence boost from LLMs comes in. You just do it and see what happens. You don't need to care if it doesn't quite work out, since it's so fast and cheap. Maybe you get anywhere from 50-150% of the result of your manual research for 5% of the effort.


>A guy is paying farmers to farm for him

Family of farmers here.

My family raises hundreds of thousands of chickens a year. They feed, water, and manage the healthcare and building maintenance for the birds. That is it. Baby birds show up in boxes at the start of a season, and trucks show up and take the grown birds once they reach weight.

There is a large faceless company that sends out contracts for a particular value and farmers can decide to take or leave it. There is zero need for human contact on the management side of the process.

At the end of the day there is little difference between a company assigning the work and having a bank account versus an AI following all the correct steps.


> A guy is paying farmers to farm for him

Pedantically, that's what a farmer does. The workers are known as farmhands.


That is HIGHLY dependent on the type and size of the farm. A lot of small row-crop farmers have, and need, no extra farmhands.

All farms need farmhands. On some farms the farmer may pull double duty, or hire custom farmhands operating under another business, but they are all farmhands just the same.

Grifters gonna grift.

I would say that Seth is farming just as much as non-developers are now building software applications.

Trying. Until you can eat it, you're just fucking around.

That's not the original commenter's point. Their point is that he expects Claude can inform him well enough to be a farm manager, and that it's not impressive since Seth is the primary agent.

I think it is impressive if it works. Like I mentioned in a sibling comment I think it already definitely proves something LLMs have accomplished though, and that is giving people tremendous confidence to try things.


> I think it is impressive if it works.

It only works if you tell Claude..."grow me some fucking corn profitably and have it ready in 9 months" and it does it.

If it's being used as a manager to simply flesh out the daily commands that someone is giving it, well then that isn't "working"; that's just a new level of what we already have with APIs and crap.


It's working if it enables him to do it when he otherwise couldn't without significantly more time, energy, etc.

He's writing it down, so it's also science.

Exactly, it's science/research; until you can feed people it's not really farming.

>until you can feed people

So if I grow biomass for fuel or feedstock for plastics that's not farming? I'm sure there are a number of people that would argue with you on that.

I'm from a part of the country where there are large chunks of land dedicated to experimental grain growing, which is research, and other than the labels at the end of crop rows you'd have a difficult time telling it from any other farm.

TL;DR: why are you gatekeeping this so hard?


Anyone can be a farmer. I've got veggies in my garden. Making a profit year after year is much, much harder.

Can't wait to see how much money they lose.

I'll see if my 6 year old can grow corn this year.


> I'll see if my 6 year old can grow corn this year.

Sure... put it on Kalshi while you're at it and we can all bet on it.

I'm pretty sure he could grow one plant with someone in the know prompting him.


Why would you rebase onto master locally in a team environment?

The way to do this is with pull requests on the remote.


The best private LLM is the one you host yourself.
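
As a minimal sketch of what "host yourself" can mean in practice, here is a short Python snippet that talks to a locally running model over HTTP. It assumes something like Ollama serving on localhost; the port, endpoint, and model name are illustrative assumptions, not part of the original comment.

    # Sketch: query a self-hosted model over a local HTTP API so nothing
    # leaves the machine. Assumes an Ollama-style server on localhost:11434;
    # the endpoint, port, and model name are illustrative.
    import json
    import urllib.request

    def ask_local_llm(prompt, model="llama3"):
        payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    if __name__ == "__main__":
        # both the prompt and the answer stay on your own hardware
        print(ask_local_llm("Why does self-hosting keep prompts private?"))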


Will these issues with Linux ever be overcome?

I want to switch but I just don't feel confident yet, and I wonder how long the "yet" will remain.


It isn't an issue with Linux; it's an issue with the companies that make proprietary software and devices with only Windows support. A better world is possible, but you need to accept that the fight isn't easy. Switch today.


> Will these issues with Linux ever be overcome?

As a Linux user forced to run Windows at work, I only see issues with Windows ;)

> I want to switch but I just don't feel confident

Live distros make it very easy to dip your toes and try, without committing to anything.

IMO Linux has much better UI options because there is so much choice and freedom.

You can likely find something that looks/works EXACTLY like you’ve always dreamed of - but maybe you have to try a few options to find it.


Here’s a simple decision tree:

Do you run any exotic hardware? Do you run MS Office regularly? Do you run any highly specialized software?

If the answer is no to all those then Linux is worth a shot.
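
That checklist boils down to a single conjunction; purely as an illustration (my own paraphrase, not the commenter's code), it could be written as:

    # The decision tree above, condensed: Linux is worth a shot only if
    # the answer to all three questions is "no".
    def linux_worth_a_shot(exotic_hardware: bool, ms_office_regularly: bool,
                           specialized_software: bool) -> bool:
        return not (exotic_hardware or ms_office_regularly or specialized_software)

    # Example: no exotic hardware, no MS Office, no specialized software.
    print(linux_worth_a_shot(False, False, False))  # True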


>Will these issues with Linux ever be overcome?

I'd ask you the inverse question: If Linux never got any better than it is currently, what would it take to push you away from Windows? I don't mean this as a challenge, I'm genuinely curious.


Not OP, but I have a couple of red lines that, if crossed, would push me to Linux: things stop "just working", or ads/nags/notifications/behaviors that I don't want cannot be disabled.

Things are very occasionally annoying right now when a new update enables some new idiotic thing but 99.9% of the time things just work.


> Will these issues with "the other side of the road" ever be overcome?

> I want to switch but I just don't feel confident yet, and I wonder how long the "yet" will remain.

For people like you who think like this.

FOREVER

You'll always dream up some reason why this side of the road is just better.


You know what else kills people?

Cars, planes, food, scooters, sports, cold, heat, electricity, medication, ...


In my experience GPT is uber-careful with health-related advice.

Which makes me think it's likely on the user, if what you said actually happened...


Please look at the post. This is about a GPT that is designed to give you health advice, with all the hallucinations, miscommunication, bad training data, and lack of critical thinking (or any thinking, obviously).


The problem with doctor appointments is that too often, physicians don't actually think carefully about your case.

It's like they one-shot it.

This is why I've had my doctor change their mind between appointments, having had more time to review the data.

Or I get 3 different experts giving me 3 different (contradicting!) diagnoses.

That's also why I always hesitate to act on their first advice.


I see many comments like this in here. Where is this so common? I'm not from the US, but I had the impression that its healthcare, while expensive, is good. If I assume most comments come from the US, then it is just expensive.

I cannot imagine a doctor evaluating just one possibility.


Yea I find it a bit condescending. Humans ain't robots, duh!

And the world wouldn't function if everyone operated at the exact same abstraction level of ideas.


The big difference is accountability. An LLM has no mortality; it has no use for fear, no embodied concept of reputation, no persistent values. Everything is ephemera. But they are useful! More useful than humans in some scenarios! So there's that. But when I consider the purpose of conversation, utility is only one consideration among many.

