
As you observed, the damage of social media is not unique to children. So more sensible legislation would serve to protect everybody from the harms of social media, not just children.

Second, age-verification vendors stand to benefit a lot from a government contract.

Third, social media and ad companies would for sure prefer a blanket ban on children rather than more careful legislation which e.g. bans targeted advertising or further regulates social media's harmful patterns.


If I squint my eyes I can maybe picture myself reading parts of 5 different books in a single day: a fiction novel (1), a Japanese textbook (2), a Japanese vocab book (3), a coffee table book I happen to need a particular piece of trivia from (4), and a mushroom hunting book (5).

Usually I know exactly which book I need for a given occasion: Sitting on a bus for a while = take my fiction; waiting in a ferry line = take my Japanese textbook; going mushroom hunting = mushroom book obv.

I don’t think I’ve ever been somewhere where I did bring a book but wished I had brought a different one. As such, I have a hard time seeing the value in being able to access my entire library wherever I want.


I have seen arguments that a lot of your nr. 3 is basically just addiction. You are making the AI slot machine generate stuff for you, and you get the sense of accomplishment that comes with thinking you created something, without putting in any of the work of actually creating it. To the rest of the world this is indistinguishable from your parent’s nr. 1.

Fair point. It's just that his number 1 was "Pointless throwaway content", and I was saying "Well, actually, it's not thrown away but used".

You may look at the output and say "Crap!", but the reality is the person using it found value in it.

(To be honest, I used to think "Crap!" about stock photos long before LLMs came onto the scene, so I have little sympathy for stock photo photographers going out of business. Those photos exist primarily to attract readers and do not provide any value to the content; they're just like ads in that regard.)


No. The coffee shop that isn’t paying an artist $300 is gonna get negative reviews and lose customers and money from their bad business decision[1]. I know I would think twice about ordering at a café which uses AI in their marketing, and I am not the only one.

The coffee shop that cannot afford the $300 for an artist and homebrews its design in Microsoft Word is doing just as well as before; the coffee shop that can afford it and still pays an artist is still doing fine. The coffee shop that pays OpenAI $5 for stolen art gets to look as cheap as it is.

1: https://www.sfgate.com/food/article/santa-cruz-restaurant-ai...


So, to save $300 (logo design with "local" talent is never $300; it is only that cheap if you offshore it), they tried to ruin a business that presumably employs multiple LOCAL people full time (worth way more than $300) with 1-star reviews to "punish" it.

This is an internet mob at its worst. Not an example of anything to emulate, in my opinion.


People hate AI, and this is one of very few ways people have to punish AI. It is bound to happen.

And in either case, this example destroys the framing that coffee shop owners are the ones who benefit from the systemic art theft employed by AI companies.


Sure, just like every software company using AI is going to go under and every video game using AI will fail?

I am not sure what you mean. The AI backlash is real, and it has real and obvious effects in the real world, with written articles to prove it.

If you are attempting here to shift the focus away from coffee shops (may I remind you, you were the one who brought that up as an example) and onto video games or software companies, I simply reject that attempt.

That there exists a software company which uses AI in its product and is not failing has no bearing on how a coffee shop which is too cheap to pay an artist for its logo does indeed look cheap to its customers, who will be inclined to give that café a negative review or otherwise avoid it.


I'm shifting the focus to the reality that exists outside of internet mobs.

99% of people don't recognize AI-generated content, and don't particularly care enough to pixel-scan every image they see.

You can death-grip articles about AI-art backlash, but they are all hyper-narrow, one-off events. The reality is that the general population doesn't really see it or care.[1]

1: https://www.forbes.com/sites/conormurray/2026/04/17/the-no-1...


Also, ignoring training when talking about the environmental costs is bad faith. Without training this image would not exist, and if nobody were generating images like these, the training would not happen. So we should really count the 10 seconds it took for inference plus the weeks or months of high-intensity compute it took to train the model.

You'd want to compare against the fraction of the training cost attributable to this one image.
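
A back-of-the-envelope way to do that amortization, as a sketch (every number below is made up purely for illustration; real per-model figures are not public):

    # Amortize an assumed training cost over an assumed lifetime image count,
    # then add the assumed per-generation inference cost. All hypothetical.
    TRAINING_KWH = 10_000_000      # assumed total energy to train the model
    IMAGES_SERVED = 1_000_000_000  # assumed lifetime number of generated images
    INFERENCE_KWH = 0.003          # assumed energy for one ~10-second generation

    per_image_kwh = INFERENCE_KWH + TRAINING_KWH / IMAGES_SERVED
    print(f"amortized energy per image: {per_image_kwh:.4f} kWh")

With these made-up numbers the training share (0.01 kWh) dwarfs the inference share; crank IMAGES_SERVED up far enough and the ranking flips, which is exactly why the attributable fraction matters.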

I was hosting a karaoke event in my town and really went out of my way to ensure my promotional poster looked nothing like AI. I really, really, really did not want my townsfolk thinking I would use AI to design a poster.

My design rules were: no gradients; no purple; prefer muted colors; plenty of sharp corners and overlapping shapes; use the Boba Milky typeface.



I mean: https://imgur.com/a/BYikxEI

The difference is very stark:

- The AI has a hard time making the geometric shapes regular. You can see the stars have different-sized arms at different intervals in the AI version. It would take a human artist longer to make it look this bad.

- The 5-point stars are still a little rounded in the AI version.

- There is way too much text in the AI version (a human designer might make that mistake, but it is very typical of AI).

- The orange 10-point star on the right with the text “you are the star” still has a gradient (AI really can’t help itself).

- The borders around the title text “Karaoke night!” bleed into the borders of the orange (gradient) 10-point star on the right, but only halfway. This is very sloppy; a human designer would fix that.

- The font face is not Milky Boba but some sort of an AI hybrid of Milky Boba, Boba Milky, and Comic Sans.

- And finally, the QR code has obvious AI artifacts in it.

The point I’m making is that it is very hard to prompt your way out of making a poster look like AI, especially when the design is intentionally made not to look like AI.


I hear what you’re saying and at the same time I don’t agree with some of your criticisms. The gradient, yep, it slipped one in. The imperfect stars? I have seen artists do this forever, presumably intentional flair. The few real “glitches” would be trivial to fix in Photoshop.

But they are very different certainly. ChatGPT generated a poster with a very sleek, “produced” style that apes corporate posters whereas you went with a much more personal touch. You are correct that yours does not look like typical AI.

My point is certainly not that the AI poster is better, only that it’s capable of producing surprising results. With minimal guidance it can also generate different styles: https://imgur.com/a/zXfOZaf

I think the trend to intentionally make stuff look “non-AI” is doomed to fail as AI gets better and better. A year or two ago the poster would have been full of nonsense letters.

> And finally, the QR code has obvious AI artifacts in them.

I wonder if this is intentional, to prevent AI from regurgitating someone’s real QR codes.

ETA: Actually, I wonder how much of the “flair” on human-drawn stars is to avoid looking like they are drag-and-drop from a program like Word. Ironic if we’ve circled back around to stars that look perfect to avoid looking like a different computer generated star.


My point is not that the AI version looks bad (although it does); it is that I hate AI, and so do many people around me. I hate AI so much, and I know so many people around me hate it as much, that I am consciously altering my designs to be as far away from AI as I can. This is the creative-design equivalent of moving from Seattle to Florida after a divorce.

About the stars: I know designers paint imperfect stars. I even did that in my design; in particular I stretched one and rotated it slightly. A more ambitious designer might go further and drag a couple of vertices around to exaggerate them relative to the others. But usually there is some balance in those decisions. AI, however, just puts the vertices wherever, and it is ugly and unbalanced. A regular geometric shape with a couple of oddities is a normal design choice, but a geometric shape which is all oddities is a lot of work for an ugly design. Humans tend not to do that.


> I am consciously altering my designs such to be as far away from AI as I can

I don’t think this is a productive choice, but it’s certainly yours to make.

> but a geometric shape which is all oddities is a lot of work for an ugly design. Humans tend not do to that

I find this such an odd thing to say. It’s way easier to draw a wonky star than a symmetrical one. Unless “drawing” here means using a mouse to drag and drop a star that a program draws for you.

Vintage illustrations are full of nonsymmetrical shapes. The classic Batman “POW” and similar were hand drawn and rarely close to symmetrical.


I draw mine in Inkscape (because I like open source more than my sanity), and Inkscape has special tools to draw regular geometric shapes. You don’t need to use those tools: you can use the freehand pen, or the Bézier curve tool, or even hand-code the <path d="M43,32l5.34-2.43l3.54-0.53" />, etc. But using these other tools is suboptimal compared to the regular-shape tool.

Apart from me, my partner also does graphic design, and unlike me she values her sanity more than open source, so she uses Illustrator for her designs. In Adobe’s walled-garden world of proprietary software it is still the same story: you generally use the specific tools to get regular shapes (or patterns) and then alter them after they are drawn. You don’t draw them from scratch. If you are familiar with modular analog synthesizers, this is like starting with a square wave and then subtracting from it to shape the signal into a more natural-sounding form.
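
To make the “regular shape with a couple of oddities” idea concrete, here is a minimal sketch (the function and the jitter amounts are my own invention, not any Inkscape or Illustrator API): generate a mathematically regular star path, then perturb just two of its vertices.

    import math
    import random

    def star_path(cx, cy, outer, inner, points=5, jitter=0.0, seed=None):
        """SVG path for a star centered at (cx, cy).

        jitter=0 reproduces the perfectly regular star a shape tool draws;
        a small jitter applied to only a couple of vertices mimics the
        deliberate hand-tweaked look, as opposed to every vertex being off.
        """
        rng = random.Random(seed)
        coords = []
        for i in range(points * 2):
            r = outer if i % 2 == 0 else inner  # alternate tips and notches
            angle = math.pi * i / points - math.pi / 2
            # Nudge only vertices 1 and 6; the rest stay perfectly regular.
            dr = r * rng.uniform(-jitter, jitter) if i in (1, 6) else 0.0
            coords.append((cx + (r + dr) * math.cos(angle),
                           cy + (r + dr) * math.sin(angle)))
        d = "M" + " L".join(f"{x:.1f},{y:.1f}" for x, y in coords) + " Z"
        return f'<path d="{d}" />'

    print(star_path(50, 50, 40, 16, jitter=0.15, seed=1))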


> I think the trend to intentionally make stuff look “non-AI” is doomed to fail as AI gets better and better.

What’s the mechanism that makes an AI ‘better’ at looking non-AI? Training on non-AI-trend images? It’s not following prompts more closely. Even if that image had no gradients or pointier shapes, it still doesn’t look like it was made by an individual.

To your counterpoints, notice that you are apologizing for the AI by finding humans that may have done something, sometime, that the AI just did. Of course! It’s trained on their art. To be non-AI, art needs to counter all averages and trends that the models are trained on.


> What’s the mechanism that makes an AI ‘better’ at looking non-AI?

I don’t know. Better training data? More training data? The difference over the past year or two is stark so something is improving it.

> Even if that image had no gradients or pointier shapes, it still doesn’t look like it was made by an individual.

The fact that humans are actively trying to make art that does not look like AI makes it clear that AI is not so obvious as many would like to pretend. If it were obvious, no one would need to try to avoid their art looking like AI.

> To your counterpoints, notice that you are apologizing for the AI by finding humans that may have done something, sometime, that the AI just did. Of course! It’s trained on their art.

Obviously.

> To be non-AI, art needs to counter all averages and trends that the models are trained on.

So in order to not look like AI, art just has to be so unique that it’s unlike any training data. That’s a high bar. Tough time to be an artist.


I don't know why you're downvoted, I think that's a reasonable use of AI and it looks pretty good.

Edit: I think I misread what you were saying, but I do think it's a nice poster! I get that design is going to have to avoid doing things that AI does, which is kind of unfortunate, because AI is likely trained on a lot of things that are generally good ideas.


Completely unrelated, but I am curious about your keyboard layout, since you typed ö instead of “-”. These two symbols are side by side in the Icelandic layout, and ö sits where - is in the English (US) layout. As such this is a common typo for people who regularly switch between the Icelandic and the English (US) layouts (source: I am that person). I am curious whether there are more layouts where that could be common.

This is also a stylistic choice that the New Yorker magazine uses for words with double vowels where you pronounce each one separately, like coöperate, reëlect, preëminent, and naïve. So possibly intentional.

Yes, this is exactly correct, and I will die on this hill. Additionally, I don't like the way a hyphenated "techno-optimism" looks and "technOOPtimism" is a bit too on-the-nose.

That makes sense[1] but it prompts the obvious question: does this style write it as typeö then?

1: Though personally I hate it, I just cannot not read those as completely different vowels (in particular ï → [i:] or the ee in need; ë → [je:] or the first e here; and ö → [ø] or the e in her)


No. Firstly, because it is spelled “typo.” Secondly, you typically use the diaeresis to tell the reader not to confuse it with a similarly spelled sound or diphthong. So it tells a reader that “reëlect” is not pronounced REEL-ect, “coöperate” is not COOP-uh-rate, and “naïve” is not NAY-v.

Because written English makes so much sense normally. God forbid someone has to figure out the ambiguous pronunciation of those particular words. It seems to me like a silly thing to provide extra guidance on.

I suspect the diaeresis was intentional, in “New Yorker” style.

https://www.arrantpedantry.com/2020/03/24/umlauts-diaereses-...


Has this happened to you? Or to anyone you know? Or do you know of a lawsuit by a label against an artist for making AI music, and a lawsuit by the same artist against an AI detector for flagging a false positive? This story seems extremely implausible.

Aside: your analogy doesn’t make sense. Horoscopes are generally not in the business of signal detection, and are usually enjoyed by the reader of the horoscope, like any other art. If you had used a sudoku solver, your analogy would make a bit more sense.


The comparison to cars is apt given how destructive this technology has been to cities, and how dangerous it is to drivers and non-drivers alike.

But otherwise you are wrong. There has been plenty of successful resistance to technology. For example, many cities, regions, and even entire countries are nuclear-free zones, where a local population successfully resisted nuclear technology. Most countries have very strict cloning regulations, to the extent that human cloning is practically unheard of despite the technology existing. And even GMO food is very limited in most countries because people have successfully resisted the technology.

Nor do I think it is normal for people to resist groundbreaking technology. The internet was not resisted, nor was the digital computer, nor calculators. There was some resistance against telephones in some countries, but that was usually about whether to prioritize infrastructure for a competing technology like the wireless telegraph.

AI is different. People genuinely hate this technology, and they have a good reason to, and they may be successful in fighting it off.


You may be underestimating the power of trillions of parameters in a model. With this many parameters overfitting is inevitable. Overfitting here means the model reproduces (or outputs) the errors in the data instead of interpolating (or inferring) the trends in the data.

In fact, given this many parameters, poisoning should be relatively easy in general, but extremely easy on niche subjects.

https://www.youtube.com/watch?v=78pHB0Rp6eI


>With this many parameters overfitting is inevitable.

Nope. Go look up double descent. Overfitting turns out not to be an issue with large models.

Your video is from a political activist, not anyone with any knowledge about machine learning. Here's a better video about overfitting: https://youtu.be/qRHdQz_P_Lo


I am not a professional statistician (only a BSc dropout) so I won’t be able to gain the expertise required to evaluate the claim here: that double descent eliminates overfitting in LLMs.

That said, I see red flags here. This is an extraordinary claim, and extraordinary claims require extraordinary evidence. My actual degree (not the dropped-out one) is in psychology, and I used statistics a lot during it; it is only a BSc, so again, I cannot claim expertise here either. But this claim, and the abstracts I scanned in various papers to evaluate it, ring alarm bells all over. I don’t trust it. It is precisely the thing we were told to beware of when we were taught scientific thinking.

In contrast, this political activist provided an example (an anecdote if you will) which showed how easy it was for an actual scientist to poison LLMs with a made-up symptom. This looks like overfitting to me. These two Medium blog posts very much feel like errors in the dataset which the models are all too happy to output as if they were inferred.

EDIT: I just watched that video, and I actually believe the claims in it; however, I do not believe your claim. Even if the video is correct, the gains would only manifest as fewer hallucinations. Note that in the demonstration the higher-parameter regression models traversed every single datapoint in the sample, and that there was an optimal model with fewer parameters which had a better fit than the overfitted ones. This means that trillions of parameters indeed make a model quite vulnerable to poison.


Almost certainly those weren't even in the training data. They showed up too soon; LLMs are retrained only every 6-12 months.

Instead, the LLM did a web search for 'bixonimania' and summarized the top results. This is not an example of training data poisoning.

>This is an extraordinary claim, and extraordinary claims require extraordinary evidence.

Well, I don't know what to tell you; double descent is widely accepted in ML at this point. Neural networks are routinely larger than their training data, and yet still generalize quite well.

That said, even a model that does not overfit can still repeat false information if the training data contains false information. It's not magic.


> even a model that does not overfit can still repeat false information

A good model will disregard outliers, or at the very least the weight of the outlier is offset by the weight of the rest of the sample. In other words, a good model won’t repeat false information. When you have too many parameters the model will traverse every outlier, even the ones that are not representative of the sample. This is the poison.
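
The textbook regression picture of that mechanism, as a sketch (toy data and NumPy’s polynomial fit; nothing LLM-specific is claimed here): with as many parameters as datapoints, the fitted curve passes through a planted outlier; with fewer parameters, it largely shrugs the outlier off.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 12)
    y = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=x.size)
    y[5] += 2.0  # one "poisoned" datapoint, far off the true trend

    x_test = np.linspace(0, 1, 200)
    truth = np.sin(2 * np.pi * x_test)

    for degree in (3, 11):  # degree 11 (12 coefficients) matches the 12 points
        coeffs = np.polyfit(x, y, degree)
        rmse = np.sqrt(np.mean((np.polyval(coeffs, x_test) - truth) ** 2))
        print(f"degree {degree:2d}: RMSE against the true curve = {rmse:.2f}")

The degree-11 fit interpolates every point, poison included, and its error against the true curve explodes. Double descent is about what happens when you keep adding parameters far beyond that interpolation point, which this classical sketch does not show.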

To me it sounds like data scientists have found an interesting and seemingly true phenomenon, namely double descent, and LLM makers are using it as a magic solution to whisk away all sorts of problems that this phenomenon may or may not help with.

> Instead, the LLM did a web search for 'bixonimania' and summarized the top results. This is not an example of training data poisoning.

Good point, I hadn’t considered this. Although it is probably more likely it did a web search with the list of symptoms and output the term from there, especially considering the prompts probably cited the symptoms from the research papers rather than the made-up term itself.

