It’s a valid thought, but it would really be like taking pictures of the sky from underwater and using AI to make them look like they were taken out of the water.
This means the AI has to predict what the sky is supposed to look like, and for that we would need out-of-water pictures as a reference in the first place, which we don’t have so far!
And even if we did have those out-of-water pictures as a reference, the AI-generated ones would still not show what is real, but a fiction. The fiction can look believable, but it cannot be studied to derive facts from it. It’s like trying to study an AI-generated language.
This sounds like my friend who literally believes that buses will go extinct within 3-5 years because every vehicle will be self-driving. It’s not thought all the way through.
Like, I guess you could run the images through a neural net as tensors and see what the weights look like. That might tell you something the endless pool of astro grad students missed. Like, maaaaaaaybe you could back out dark matter from some strangely weighted neuron, or there might be something lurking in the noise that everyone overlooked. But I really, really doubt it.
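For what it’s worth, the “see what the weights look like” part is cheap to try. Here’s a minimal sketch (assuming PyTorch, with a random tensor standing in for real sky images and an arbitrary toy architecture) of training a tiny autoencoder on image patches and dumping its first-layer filters. The point is only that inspecting weights is easy, not that anything interesting falls out of them.

```python
# Illustrative sketch only: the random tensor stands in for real telescope
# patches, and the architecture is an arbitrary toy autoencoder.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a stack of 16x16 grayscale sky patches (batch, channels, H, W).
patches = torch.rand(512, 1, 16, 16)

# Tiny convolutional autoencoder: compress the patches, then reconstruct them.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # encoder: 8 learned filters
    nn.ReLU(),
    nn.Conv2d(8, 1, kernel_size=3, padding=1),   # decoder: back to one channel
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):
    recon = model(patches)
    loss = loss_fn(recon, patches)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# "See what the weights look like": the encoder's 8 filters are just 3x3
# arrays you can print or plot. On real data you'd look for structure here;
# on random noise there is nothing meaningful to find.
first_layer = model[0].weight.detach()   # shape: (8, 1, 3, 3)
print(first_layer.squeeze(1))
```

Whether those filters encode anything a grad student hasn’t already found is the doubtful part, not the mechanics of looking.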