
> Humans don't "train" in the sense that statistical models, or neural nets, are trained. We don't have any clear supervision for example, no ground truth.

GANs are a form of unsupervised learning. They don't have "ground truth" either, just lots of existing images which they learn to imitate and to distinguish from other kinds of images not present in the training set. Similarly, humans learn to distinguish natural images from unnatural ones starting from birth, and use that learned feedback to filter the images produced by our imaginations: a natural example of a GAN. Our input is less… focused, and includes non-visual elements, and there are of course other aspects to general intelligence besides visual processing and imagination, but in this area at least we operate on the same basic principles.
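The adversarial setup described above can be sketched numerically. This is a deliberately tiny toy, not a practical GAN: a one-parameter-family "generator" and a logistic "discriminator" on 1-D data, with every number (means, learning rates, step counts) made up for illustration. The point it shows is the one in the paragraph: the generator never sees labels or ground truth, only the discriminator's opinion of how much its samples resemble the training data.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from a fixed 1-D distribution the GAN should imitate.
real = rng.normal(loc=4.0, scale=0.5, size=(64, 1))

# Generator: a single affine map from latent noise z to a sample.
g_w, g_b = 1.0, 0.0

# Discriminator: logistic regression, D(x) = sigmoid(d_w * x + d_b).
d_w, d_b = 0.1, 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30, 30)))

def generate(z):
    return g_w * z + g_b

for step in range(500):
    z = rng.normal(size=(64, 1))
    fake = generate(z)

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    pr, pf = sigmoid(d_w * real + d_b), sigmoid(d_w * fake + d_b)
    d_w += 0.05 * (np.mean((1 - pr) * real) - np.mean(pf * fake))
    d_b += 0.05 * (np.mean(1 - pr) - np.mean(pf))

    # Generator: gradient ascent on log D(fake) -- fool the discriminator.
    # Note it only ever receives the discriminator's feedback, never labels.
    pf = sigmoid(d_w * generate(z) + d_b)
    g_w += 0.05 * np.mean((1 - pf) * d_w * z)
    g_b += 0.05 * np.mean((1 - pf) * d_w)

# After training, generated samples should have drifted from their initial
# mean of 0 toward the real data's mean (~4.0).
samples = generate(rng.normal(size=(1000, 1)))
```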

> So for instance, you can't expect to train it on images of manga characters and find that it now draws you in the style of Michelangelo. That's what I mean.

Are we talking about GANs here, or humans? A human trained exclusively on manga wouldn't suddenly develop the ability to imitate Michelangelo either. On the other hand, a GAN trained on manga may sometimes produce images which are not recognizably part of the manga style—which could be seen as an entirely new style. (It would help the process along if you included non-manga images in the training set, as a human would have access to those as well. Then different styles of the same scene just become one more dimension in your "manifold" of all possible images.)

Inventing and learning to draw in a new style isn't something that comes spontaneously to humans. It takes a lot of practice both learning what makes the style distinctive and learning to create art in the new style. A GAN has most of the basic elements required to do the same, but we generally don't use it that way. An interesting experiment might be to permute the discriminator to favor specific elements which were not common in the training set and then train the generator to satisfy the altered discriminator.
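One way to read the experiment proposed above: add a term to the discriminator's score that rewards some attribute rare in the training set, then train the generator against the altered critic. A toy sketch of just the altered-critic part, with a made-up stand-in critic and a made-up "rare feature" (nothing here is a real GAN component):

```python
import numpy as np

def discriminator(x):
    """Stand-in for a trained critic: higher means 'looks like training data'.
    Here the training data is assumed to be centered at 0."""
    return -x ** 2

def rare_feature(x):
    """Some attribute uncommon in the training set, e.g. large positive values."""
    return x

def altered_score(x, lam=0.5):
    # Biased critic: still rewards plausibility, but also the rare feature.
    # A generator trained against this score would drift toward samples that
    # keep some plausibility while expressing the favored attribute.
    return discriminator(x) + lam * rare_feature(x)

xs = np.linspace(-3, 3, 601)
best_plain = xs[np.argmax(discriminator(xs))]   # plain critic: data center
best_biased = xs[np.argmax(altered_score(xs))]  # biased critic: pulled off-center
```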

> Which means, it can't innovate.

What exactly do you mean by "innovate"? To me the word implies intent, which is clearly out of scope for a mere GAN. Intentional behavior would put it in the domain of an artificial general intelligence or AGI. However, generating images which aren't in the training set is just a matter of choosing a point on the "manifold" which doesn't correspond to any of the input images. Though expecting the GAN to spontaneously invent a distinctive and consistent new style which appeals to humans, without being one itself or otherwise being trained in what humans might find appealing, is a bit much IMHO.
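"Choosing a point on the manifold which doesn't correspond to any input image" is just sampling or interpolating in the generator's latent space. A minimal sketch, using an arbitrary smooth map as a stand-in for a trained generator (the weights and dimensions are hypothetical): points on the line between two known latent codes map to outputs that match neither endpoint, yet still lie on the generator's learned manifold.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a trained generator: any smooth map from latent space to
# "image" space illustrates the point (hypothetical random weights).
W = rng.normal(size=(4, 16))

def generate(z):
    return np.tanh(z @ W)  # a 16-"pixel" image for a 4-d latent code

# Two latent codes whose outputs we treat as known, training-set-like images.
z_a, z_b = rng.normal(size=4), rng.normal(size=4)

# Intermediate points yield novel images resembling neither endpoint exactly.
novel = [generate((1 - a) * z_a + a * z_b) for a in (0.25, 0.5, 0.75)]
for img in novel:
    assert not np.allclose(img, generate(z_a))
    assert not np.allclose(img, generate(z_b))
```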

The biggest difference remains the fact that this GAN only has manga for its input, which limits its ability to produce anything outside that context. Its whole life is manga and nothing else. Humans have the same issue with creating things completely unrelated to any prior experience, but they have a much larger and more varied pool of experiences to draw from. (And even then humans can easily get stuck in one particular style and find it difficult to change.)



I'm sorry for the confusion I caused with my inexact terminology. What I mean by "ground truth" in the context of this conversation is the images that GANs are trained to reproduce. Supervision doesn't need to come in the form of labels. GANs are weakly supervised, but they are given examples of exactly what they need to model. They are trained to reproduce those examples and, as you say, they can't be expected to learn to do anything else.

This is a general rule about neural networks, as we have them today: they learn to reproduce their training set. Nothing more, and nothing less.

Humans, now, don't need to see examples of a thing before we can make one. If that were the case, we would never have created all the technology we have, of which there was no previous example. For instance, at some point in our history someone figured out how to carve a hand axe for the first time, ever. That person didn't have any examples to go by. There were no such objects in nature, before that time. Certainly that person had some idea of concepts such as "sharp" or "pointy" or who knows what else, but they had no blueprint for a hand axe. This is what I mean by "innovation".

"Inventing and learning to draw in a new style" is absolutely something that comes spontaneously to humans! That's the entire history of human art: people inventing new ways to express themselves through various art forms. Art would be way too dull if nobody could come up with new things.

But I certainly agree that it's unfair to expect the same kind of innovation from GANs or from other neural nets. However, I think that's the case because neural nets are nothing like humans. If I understand correctly, you're claiming that how the human mind works and how neural networks work are very similar, so I'm a bit confused, because in that case you should expect them to have the same abilities humans do. Sorry if I've misunderstood you, but could you clarify? If human creativity is statistical modelling, and GANs do statistical modelling (they kiiind of do), then we should expect GANs to be able to do everything that humans can do, no?



