
This is a common misunderstanding. The thing knows what a naked woman looks like and what a child looks like; it puts two and two together and voila. It doesn't need to be trained on the real thing to be able to generate it.


Well, maybe. Maybe not. See "Why can't ChatGPT draw a full glass of wine?"

https://youtu.be/160F8F8mXlo

Diffusion models do possess some capability to synthesize ideas, but that capability does not necessarily generalize to every possible use case. So it's impossible to say for certain that that is what is happening.


We can get more certainty by testing combinations of those concepts with a whole bunch of other ones. Naked skateboarder. Child construction worker. It has a lot more variety for both of those concepts than with wine glasses.

We can also check models that have very highly vetted input sets.


That video makes some good observations, but it's also hilarious that he tried to "retrain" ChatGPT by asking it in the chat to remove some items from its dataset.


Does that necessarily follow? Wouldn't that be prone to outputting small naked adult women, and/or naked children with boobs?


Why not? I am pretty sure there is no training data of “whales playing the guitar”, but if you ask a model to draw one, it will do a respectable job of imagining that scenario.
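Anyone can check this directly. Here's a minimal sketch using the Hugging Face diffusers library to prompt an off-the-shelf text-to-image model with a concept combination that almost certainly isn't in its training data; the checkpoint name and prompt are just illustrative assumptions, not a claim about what any particular model was trained on.

  import torch
  from diffusers import StableDiffusionPipeline

  # Assumed publicly available checkpoint; swap in whatever model you have access to.
  pipe = StableDiffusionPipeline.from_pretrained(
      "runwayml/stable-diffusion-v1-5",
      torch_dtype=torch.float16,
  ).to("cuda")

  # A prompt combining two concepts unlikely to co-occur in any training image.
  prompt = "a humpback whale playing an electric guitar, studio photograph"
  image = pipe(prompt, num_inference_steps=30).images[0]
  image.save("whale_guitar.png")

If the model composes the two concepts plausibly, that's evidence for generalization; if it garbles them (as with the full wine glass), that's evidence the composition wasn't really learned.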



If it is trained on well-tagged images of adult men and women both clothed and unclothed, and clothed children (not that all pictures of unclothed children are CSAM to start with), understanding the relation of clothed to unclothed appearance could allow a model to reasonably generalize unclothed child bodies.

Further, models that are otherwise well trained with a mix of photographic and drawn content can often generalize specific concepts for which their training only includes examples from drawn content to photorealistic imagery involving that concept.


I don't believe that is true. A woman and a child have distinct characteristics that are not interchangeable. A child for example can be detected by the shape of the nose and nostrils as just one data point. There are many more data-points that psycho-analysts use to determine if a person is attracted to children. AI would have to understand quite a bit of biology and understand how humans develop to get this right.


Image generation models are perfectly capable of mixing different concepts to create images of things they're incredibly unlikely to have seen during training.


That sounds hit-or-miss to me. Without logic that teaches the AI why a child and an adult look the way they do, it would sometimes get it right by chance and other times not. I do not see how one could guarantee an outcome with the logic of these generation models unless it understands biology or was trained on real people, which advertisements have no shortage of. But advertisements lack nudity and would leave out details like a hymen.


You can have an artist draw art with the differences, and then you can get it via style transfer. But there are already many images of children with noses.


This is not a matter of opinion. Go to any AI image generator and tell it to generate whatever you want.


Opinions aside, the question is how it learned to draw details about a child that do not exist on an adult and vice versa.


Statistically, you'd expect this to result in depictions of children with pubic hair - some adults opt to get rid of theirs, but most have it. Are you sure you're not projecting your prior knowledge about human biology onto an image transformer model?



