
Why did you write this using AI?


Gotta drive traffic to that blog of AI generated posts somehow


I think this person genuinely writes like an LLM. Read the rest of their comments.

My LLM radar picks it up as well.

A reverse uncanny valley


I think they are simply typing on their phone. On an iPhone, three dots and a space becomes the Unicode ellipsis, two hyphens becomes an em dash, apostrophes and quotes become curvy, letters are capitalized automatically. These things are not only easy to type, but hard not to type. I think they just really, really like em dashes. Not even ChatGPT uses them so frequently.


What made your radar go off? The em-dashes? As a lover/user of em-dashes myself, I'm curious to learn more about what you think "LLM text" looks like in your head radar detector unit. :)


Information density. LLMs are great at stringing words together but don't pack ideas tightly.

Also there's the personality. I've talked with ChatGPT enough...

I'm open to the entire account being an agent. That's certainly possible.

There's a new market for astroturfing virality. Create hundreds of agents on various sites and have them engage in pablum and occasionally mention your product.

We're entering a phase where you can't just have a dumb model to filter that out.


The problem with answering this is that they learn how to sound less like a robot.

Just use a plain hyphen (-), that helps a lot.


Look at his blog. 0 spelling errors, 2 big articles in 1 day. A LOT of —…

This is just an LLM. I would be surprised if this guy writes like this.

Why do you think he’s NOT an LLM?


Wait... not having spelling errors is now a mark of AI?

Am I the only person who proofreads emails anymore?


> Wait... not having spelling errors is now a mark of AI?

When you output long blog articles more than daily, it is. Proofreading takes time, and someone who cares enough to proofread will probably care enough to put in more time on other things that an LLM wouldn't care about (like information density, as noted in another comment; or editing after the fact to improve the overall structure; or injecting idiosyncratic wit into headings and subheadings).


Please take no offense—I genuinely want to understand. I agree that my blog needs work, especially with less fluff and more value—I'm working on that.

I guess where I’m coming from is this: why is it assumed that using tools like AI or Grammarly takes away from the creative process? For me, they speed up the mechanical side of things—grammar, flow, even structure—so I can spend more time on ideas, storytelling, informing, or just getting unblocked.

I do get frustrated when ChatGPT changes my wording or shifts the meaning of what I’m trying to say. It can definitely throw a wrench into the overall story. But in those cases, I rephrase my prompt, asking it not to touch the narrative or my word choices, just to act like a word processor on steroids or an expert editor.

I’m not saying these tools replace a good human editor—far from it. If I ever get to the point where I can work with a real editor or proofreader and so on, I’d choose the human every time. But until then, these tools help me keep the momentum going—and I don’t see that as a lack of care.

On the contrary, it often takes me more time to get the output right—because I’m trying to make sure it still reflects exactly what I want to say and express.

Maybe it’s just a different kind of process?


Even if it’s you pulling the strings, it feels like what it is: a robot talking. It feels fake. Because it is. You’re not unique, so you’ll never stand out either. Just learn grammar on your own, and you’ll retain/add character to the text.

Now you’re just prompting. Just post the prompt, that’d be way more fun to read.


I don't know about his blog since this is a thread about whether or not his comment is AI-generated, but I ran his comment through GPTZero and it reports it's confident the comment is entirely human. I asked Claude to summarize his comment and ran that summary through GPTZero and it reported it was confident that it's entirely AI-generated. Maybe the comment didn't set off my LLM radar because I didn't draw conclusions about the comment by looking at the blog, which very well might be 100% AI-generated.


Ok. I’m left speechless—but I can only comment that I’m trying to be genuine—obviously failing at an alarming rate! Yes, my blogs are edited with ChatGPT or whichever AI tool I have open, but my words and experience are my own, for what it’s worth—again, I am not an LLM agent. To be fair, I sometimes think ChatGPT writes like me. Where’s Sam when you need him? (Tasteless joke.)


I hope I don’t get banned from HN—I really like it here. Not kidding.

I write a lot (maybe too much, some might say). I actually spent last weekend writing 10k words for a self-help book that just popped into my mind - and yes, trust me, I did more than 2 big articles in one day, I just haven't published them yet, and to be frank, I'm a little worried now.

For full transparency: yes, I used ChatGPT, Grammarly, and Hemingway to assist with the writing structure, grammar and spelling. Not originality and wording. It just helps me move faster and keeps the flow going.

Will my book be a bestseller? Doubt it—it’s my first. Will anyone read it? No clue. Maybe if it’s free. Was it worth my time instead of coding? Absolutely. It cleared my mind and shifted my focus—something I think everyone should try at least once. So yeah, maybe I do write like ChatGPT... but one could also say ChatGPT writes like me. Anyway, like I said, I hope I don’t get banned—I really do like it here.


"A reverse uncanny valley" I had to look that up, so embarrassing (especially from a tech nerd). Thank you for pointing that out, i will definitely focus less on perfection and be less worried about tyypos from now on— genuinely NOT being sarcastic and sincerely appreciate the feedback.


Is it just the list? I'm curious what specifically sets off your LLM radar.


There are so many little things that set it off. And this… person? sets off 90% of them.


Too many to list even 1?


> Consumers have had it with clunky, slow automotive technology

No. I don't want it. I want Not to have it.

I don't want a touchscreen. I don't want a computer car. And I definitely don't want an internet-connected car.


IMHO, a computer car and even internet-connected car is fine. However, I want a computer car that I actually own. If it's my car that I paid for, I should have full access to the software that runs it. If not, then I don't own the car, I'm just renting it.


Some random thoughts, might be interesting (or not).

Some things that really do fit into tables, where there are no empty fields in the rows:

- Relational databases

- Hardware circuit truth tables

- Assembly language opcodes

- FPGA compiler output

- Matrix multiplication. Or some GPU programs, e.g. if you want a conditional statement using multiply-add, then 'if (cond) then x1 else x2' becomes 'out = (cond * x1) + ((1 - cond) * x2)' (rough sketch below the list).
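
Rough sketch of that branchless select in Python/NumPy (toy values I made up, not an actual GPU kernel, but it's the same multiply-add trick):

    import numpy as np

    # Branchless "if (cond) then x1 else x2": compute both sides and blend.
    # cond is 1.0 where the condition holds, 0.0 where it doesn't.
    cond = np.array([1.0, 0.0, 1.0, 0.0])
    x1 = np.array([10.0, 20.0, 30.0, 40.0])
    x2 = np.array([1.0, 2.0, 3.0, 4.0])

    out = cond * x1 + (1.0 - cond) * x2
    print(out)  # [10.  2. 30.  4.]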

Those things have good performance in one way or another. But it's not easy to make all the application logic fit into a table-based schema.

E.g., why does someone choose a Python web server framework that sends and receives JSON? Because it's really super easy to extend and add something without knowing what you'll need in advance.

But if you try to fit that into a table, you'll have to keep changing the table schema to match whatever the program does at the moment. And then if there's one new extra-complex or long-running function, it will never easily fit the same schema as the others. You'll have to break the big function down into smaller common operations.
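
To make that concrete, here's a toy sketch (Flask is just an example pick, and the endpoint and field names are made up): with JSON you can start returning an extra field with one line, while the table-backed version needs a schema migration first.

    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.get("/user/<int:user_id>")
    def get_user(user_id):
        user = {"id": user_id, "name": "Alice"}
        # New requirement? Just add a field; nothing else has to change.
        user["last_login"] = "2024-01-01"
        return jsonify(user)

    # The table-backed equivalent has to change the schema first:
    #   ALTER TABLE users ADD COLUMN last_login TIMESTAMP;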


Google is making a huge mistake. They are clearly getting scammed; the price is up to $32B from $23B less than a year ago.

There is no pressure or need to buy Wiz.


Wiz has no brand; no one knows who they are.

Revenue from Wiz's customers will not make back $32 billion even in 30 years.

Wiz's technology is irrelevant. I think Google already scans for vulnerabilities and misconfigurations. And can build similar for low millions of dollars.


Plenty of people know who they are and have for quite a while.


A few years is not quite a while.


Yeah, but Instagram and WhatsApp have billions of users. Everybody has heard of them. Advertising on Instagram generates revenue.

Wiz is a SaaS b2b startup. Even on a forum for startups most people haven't heard of them.

Wiz reportedly has a revenue of $750M. It would take Google 30 years or more to break even on this deal. But like all bs startups Wiz will fade into irrelevancy 6 months after being acquired.

Google is getting completely scammed.


Nobody thought Instagram and WhatsApp were good acquisitions at the time.


Instagram was roughly 10 people when it got bought, had less than 30M users and $0 in revenue.


This: "But like all bs startups Wiz will fade into irrelevancy 6 months after being acquired"


Don't do it!


Great!


> a nonexclusive, royalty-free, worldwide license for the purpose of doing as you request with the content you input in Firefox.

This is super bog-standard nothing-special stuff.


AI voice is an overwhelmingly harmful technology. Its biggest use will be to hurt people.


It will unfortunately and undoubtedly be used for mass automation of scams, but text AI (and pre-AI automation) has been used for that for many years as well. It doesn't really make sense to say "ok, we should allow all forms of AI besides voice because of scams", I think.

But yes, there needs to be some spreading of public awareness.


That's if you answer phone calls from numbers not already in your contacts. For me, all such numbers go to voicemail, and if the voice is of someone I know, I'll just call them directly.

If you do any of the above you are looking to be scammed!


Oh yes? The scambot will leave a distress message and a number in your voicemail, using the voice of a relative. You would know better, but I guarantee old people will call the number and strike up a convo with the virtual relative.


Erm, no. Its biggest use will be... https://www.youtube.com/watch?v=LTJvdGcb7Fs :-)))


Cue all the responses saying "it's already been possible to harm people, AI doesn't fundamentally change anything, nothing to worry about"


Counterpoint: We were barely doing anything about it when bad actors were pwning people pre-AI, like with social media propaganda or romance scams.

And if we still do nothing about it post-AI? Well, that is already the status quo, so caring now feels performative unless we're going to finally chit chat about solutions.

The same could be said for the internet. "The internet can be used for bad" is an empty, trivial claim, not an insight that needs a standing ovation. The conversation we need is what to do about it. And the solutions need to be real ones, not "we need to put the cat back in the bag".


I unfortunately agree with you. Old people with confusion/dementia, schizoid types, or very naive persons will fall for shattering scams. And the consequences on their grasp on reality will be terrible.


Doubt it. Its biggest use will be voice assistants.


Nope. Awareness will inoculate people. “Authenticating” someone via the mere sound of their voice was always broken, anyway… Ever see the great movie Sneakers (1992)?


Do you live in reality? Because that clearly isn’t happening.


And phones enable scams. So your idea is to… Abandon all telephony??

You should not judge a tool by the worst use someone can come up with for it.


It's the most common use by far.


There are a bunch of YouTube channels that seem to use it, so there's that.

